[1] L. Deng and X. Li, "Machine learning paradigms for speech recognition: An overview," IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, no. 5, pp. 1060–1089, 2013.
[2] L. Deng, D. Yu et al., "Deep learning: Methods and applications," Foundations and Trends® in Signal Processing, vol. 7, no. 3–4, pp. 197–387, 2014.
[3] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. Cambridge, MA: MIT Press, 2016, vol. 1.
[4] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath et al., "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups," IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82–97, 2012.
[5] J. Heaton, N. Polson, and J. H. Witte, "Deep learning for finance: Deep portfolios," Applied Stochastic Models in Business and Industry, vol. 33, no. 1, pp. 3–12, 2017.
[6] S. Min, B. Lee, and S. Yoon, "Deep learning in bioinformatics," Briefings in Bioinformatics, vol. 18, no. 5, pp. 851–869, 2017.
[7] R. Miotto, F. Wang, S. Wang, X. Jiang, and J. T. Dudley, "Deep learning for healthcare: Review, opportunities and challenges," Briefings in Bioinformatics, 2017.
[8] P. Badjatiya, S. Gupta, M. Gupta, and V. Varma, "Deep learning for hate speech detection in tweets," in Proceedings of the 26th International Conference on World Wide Web Companion. International World Wide Web Conferences Steering Committee, 2017, pp. 759–760.
[9] N. G. Polson and V. O. Sokolov, "Deep learning for short-term traffic flow prediction," Transportation Research Part C: Emerging Technologies, vol. 79, pp. 1–17, 2017.
[10] S. Kubrick and A. C. Clarke, "A space odyssey," Hollywood, CA, USA: Metro-Goldwyn-Mayer, 2001.
[11] K. N. Rosenfeld, "Terminator to Avatar: A postmodern shift," 2010.
[12] C. E. Shannon, "XXII. Programming a computer for playing chess," The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, vol. 41, no. 314, pp. 256–275, 1950.
[13] J. McCarthy and P. J. Hayes, "Some philosophical problems from the standpoint of artificial intelligence," in Readings in Artificial Intelligence. Elsevier, 1981, pp. 431–450.
[14] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot et al., "Mastering the game of Go with deep neural networks and tree search," Nature, vol. 529, no. 7587, p. 484, 2016.
[15] "Deep neural networks architecture functional description," https://cdn.edureka.co/blog/wp-content/uploads/2017/05/Deep-Neural-Network-What-is-Deep-Learning-Edureka.png, accessed: 2018-07-10.
[16] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016, http://www.deeplearningbook.org.
[17] "Deep neural networks historical timeline," https://towardsdatascience.com/a-weird-introduction-to-deep-learning-7828803693b0, accessed: 2018-07-10.
[18] K. Arulkumaran, M. P. Deisenroth, M. Brundage, and A. A. Bharath, "Deep reinforcement learning: A brief survey," IEEE Signal Processing Magazine, vol. 34, no. 6, pp. 26–38, 2017.
[19] S. Hong, S. Kwak, and B. Han, "Weakly supervised learning with deep convolutional neural networks for semantic segmentation: Understanding semantic layout of images with minimum human supervision," IEEE Signal Processing Magazine, vol. 34, no. 6, pp. 39–49, 2017.
[20] X. Chu, W. Ouyang, H. Li, and X. Wang, "Structured feature learning for pose estimation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4715–4723.
[21] C. Deng, X. Liu, C. Li, and D. Tao, "Active multi-kernel domain adaptation for hyperspectral image classification," Pattern Recognition, vol. 77, pp. 306–315, 2018.
[22] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.
[23] A. Creswell, T. White, V. Dumoulin, K. Arulkumaran, B. Sengupta, and A. A. Bharath, "Generative adversarial networks: An overview," IEEE Signal Processing Magazine, vol. 35, no. 1, pp. 53–65, 2018.
[24] C. Vondrick, H. Pirsiavash, and A. Torralba, "Generating videos with scene dynamics," in Advances in Neural Information Processing Systems, 2016, pp. 613–621.
[25] J. Wu, C. Zhang, T. Xue, B. Freeman, and J. Tenenbaum, "Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling," in Advances in Neural Information Processing Systems, 2016, pp. 82–90.
[26] K. Schawinski, C. Zhang, H. Zhang, L. Fowler, and G. K. Santhanam, "Generative adversarial networks recover features in astrophysical images of galaxies beyond the deconvolution limit," Monthly Notices of the Royal Astronomical Society: Letters, vol. 467, no. 1, pp. L110–L114, 2017.
[27] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," arXiv e-prints, Nov. 2015.