Data Analytics in Bioinformatics. Group of authors
1.11 Conclusion & Future Scope
AI has made significant contributions over the past six decades. One of its sub-domains, Machine Learning, has likewise made its mark in research. This chapter highlighted its main constituent, supervised learning, together with techniques such as the k-nearest neighbor algorithm, classification, regression, and decision trees. The chapter also presented an analysis of the popular Heart Disease dataset [41], along with its numerical interpretations; the implementation was done in Python (Google Colab). A brief introduction to unsupervised learning and reinforcement learning was also given.
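To make the summarised technique concrete, the following is a minimal sketch of the k-nearest neighbor classification idea mentioned above, written in plain Python. The feature rows and labels are hypothetical stand-ins loosely modelled on Heart Disease attributes (e.g. age and cholesterol); they are not the chapter's actual dataset [41] or results.

```python
# Minimal k-nearest-neighbour classifier sketch (plain Python, no libraries).
# Training rows are hypothetical [age, cholesterol] pairs; label 1 = disease,
# label 0 = healthy. These values are illustrative only.
import math
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Label a query point by majority vote among its k nearest neighbours."""
    # Euclidean distance from the query to every training row.
    dists = sorted(
        (math.dist(row, query), label) for row, label in zip(train_X, train_y)
    )
    # Majority vote over the k closest labels.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

X = [[63, 233], [37, 250], [41, 204], [56, 236], [57, 354], [44, 141]]
y = [1, 1, 0, 0, 1, 0]

print(knn_predict(X, y, [50, 240], k=3))  # classifies a new patient-like point
```

In practice the chapter's workflow would use a library implementation (e.g. scikit-learn's `KNeighborsClassifier` in Google Colab), but the voting logic is the same as sketched here.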
In the future, this research will continue in the field of supervised learning (e.g., logistic regression and neural networks) and its subfields, seeking further similarities that may broaden the research perspective.
References
1. Guo, J., He, H., He, T., Lausen, L., Li, M., Lin, H., Zhang, A., Gluoncv and gluonnlp: Deep learning in computer vision and natural language processing. J. Mach. Learn. Res., 21, 23, 1–7, 2020.
2. Abas, Z.A., Rahman, A.F.N.A., Pramudya, G., Wee, S.Y., Kasmin, F., Yusof, N., Abidin, Z.Z., Analytics: A Review Of Current Trends, Future Application And Challenges. Compusoft, 9, 1, 3560–3565, 2020.
3. Géron, A., Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, O’Reilly Media, United States of America, 2019.
4. Alshemali, B. and Kalita, J., Improving the reliability of deep neural networks in NLP: A review. Knowl.-Based Syst., 191, 105210, 2020.
5. Klaine, P.V., Imran, M.A., Onireti, O., Souza, R.D., A survey of machine learning techniques applied to self-organizing cellular networks. IEEE Commun. Surv. Tut., 19, 4, 2392–2431, 2017.
6. Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Kudlur, M., Tensorflow: A system for large-scale machine learning, in: 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283, 2016.
7. Alpaydin, E., Introduction to machine learning, MIT Press, Cambridge, MA, 2020.
8. Larranaga, P., Calvo, B., Santana, R., Bielza, C., Galdiano, J., Inza, I., Robles, V., Machine learning in bioinformatics. Briefings Bioinf., 7, 1, 86–112, 2006.
9. Almomani, A., Gupta, B.B., Atawneh, S., Meulenberg, A., Almomani, E., A survey of phishing email filtering techniques. IEEE Commun. Surv. Tut., 15, 4, 2070–2090, 2013.
10. Kononenko, I., Machine learning for medical diagnosis: History, state of the art and perspective. Artif. Intell. Med., 23, 1, 89–109, 2001.
11. Kotsiantis, S.B., Zaharakis, I., Pintelas, P., Supervised machine learning: A review of classification techniques, in: Emerging Artificial Intelligence Applications in Computer Engineering, vol. 160, pp. 3–24, 2007.
12. Freitag, D., Machine learning for information extraction in informal domains. Mach. Learn., 39, 2–3, 169–202, 2000.
13. Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., Improving language understanding by generative pre-training, URL https://s3-us-west-2.amazonaws.com/openai-assets/researchcovers/languageunsupervised/languageunderstanding paper.pdf, 2018.
14. Garcia, V. and Bruna, J., Few-shot learning with graph neural networks, In Proceedings of the International Conference on Learning Representations (ICLR), 3, 1–13, 2018.
15. Miyato, T., Maeda, S.I., Koyama, M., Ishii, S., Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE Trans. Pattern Anal. Mach. Intell., 41, 8, 1979–1993, 2018.
16. Tarvainen, A. and Valpola, H., Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results, in: Advances in Neural Information Processing Systems, pp. 1195–1204, 2017.
17. Baldi, P., Autoencoders, unsupervised learning, and deep architectures, in: Proceedings of ICML Workshop on Unsupervised and Transfer Learning, 2012, June, pp. 37–49.
18. Srivastava, N., Mansimov, E., Salakhudinov, R., Unsupervised learning of video representations using lstms, in: International Conference on Machine Learning, 2015, June, pp. 843–852.
19. Niebles, J.C., Wang, H., Fei-Fei, L., Unsupervised learning of human action categories using spatial-temporal words. Int. J. Comput. Vision, 79, 3, 299–318, 2008.
20. Lee, H., Grosse, R., Ranganath, R., Ng, A.Y., Unsupervised learning of hierarchical representations with convolutional deep belief networks. Commun. ACM, 54, 10, 95–103, 2011.
21. Memisevic, R. and Hinton, G., Unsupervised learning of image transformations, in: 2007 IEEE Conference on Computer Vision and Pattern Recognition, 2007, June, IEEE, pp. 1–8.
22. Dy, J.G. and Brodley, C.E., Feature selection for unsupervised learning. J. Mach. Learn. Res., 5, Aug, 845–889, 2004.
23. Kim, Y., Street, W.N., Menczer, F., Feature selection in unsupervised learning via evolutionary search, in: Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2000, August, pp. 365–369.
24. Shi, Y. and Sha, F., Information-theoretical learning of discriminative clusters for unsupervised domain adaptation, Proceedings of the International Conference on Machine Learning, 1, pp. 1079–1086, 2012.
25. Balakrishnan, P.S., Cooper, M.C., Jacob, V.S., Lewis, P.A., A study of the classification capabilities of neural networks using unsupervised learning: A comparison with K-means clustering. Psychometrika, 59, 4, 509–525, 1994.
26. Pedrycz, W. and Waletzky, J., Fuzzy clustering with partial supervision. IEEE Trans. Syst. Man Cybern. Part B (Cybern.), 27, 5, 787–795, 1997.
27. Andreae, J.H., The future of associative learning, in: Proceedings 1995 Second New Zealand International Two-Stream Conference on Artificial Neural Networks and Expert Systems, 1995, November, IEEE, pp. 194–197.
28. Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., Riedmiller, M., Playing atari with deep reinforcement learning, in: Neural Information Processing Systems (NIPS) ’13 Workshop on Deep Learning, 1, pp. 1–9, 2013.
29. Abbeel, P. and Ng, A.Y., Apprenticeship learning via inverse reinforcement learning, in: Proceedings of the Twenty-First International Conference on Machine learning, 2004, July, p. 1.
30. Wiering, M. and Van Otterlo, M., Reinforcement learning. Adapt. Learn. Optim., 12, 3, 2012.
31. Ziebart, B.D., Maas, A.L., Bagnell, J.A., Dey, A.K., Maximum entropy inverse reinforcement learning, in: AAAI, vol. 8, pp. 1433–1438, 2008.
32. Rothkopf, C.A. and Dimitrakakis, C., Preference elicitation and inverse reinforcement learning, in: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2011, September, Springer, Berlin, Heidelberg, pp. 34–48.
33. Anderson, M.J., Carl Linnaeus: Father of Classification, Enslow Publishing, LLC, New York, 2009.
34.