Sociol. Rev., 23, 6, 652–660, 1958.

      35. Zaffalon, M. and Miranda, E., Conservative inference rule for uncertain reasoning under incompleteness. J. Artif. Intell. Res., 34, 757–821, 2009.

      36. Sathya, R. and Abraham, A., Comparison of supervised and unsupervised learning algorithms for pattern classification. Int. J. Adv. Res. Artif. Intell., 2, 2, 34–38, 2013.

      37. Tao, D., Li, X., Hu, W., Maybank, S., Wu, X., Supervised tensor learning, in: Fifth IEEE International Conference on Data Mining (ICDM’05), 2005, November, IEEE, p. 8.

      38. Krawczyk, B., Woźniak, M., Schaefer, G., Cost-sensitive decision tree ensembles for effective imbalanced classification. Appl. Soft Comput., 14, 554–562, 2014.

      39. Wang, B., Tu, Z., Tsotsos, J.K., Dynamic label propagation for semi-supervised multi-class multi-label classification, in: Proceedings of the IEEE International Conference on Computer Vision, pp. 425–432, 2013.

      40. Valizadegan, H., Jin, R., Jain, A.K., Semi-supervised boosting for multi-class classification, in: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2008, September, Springer, Berlin, Heidelberg, pp. 522–537.

      41. Lapp, D., Heart Disease Dataset, retrieved from https://www.kaggle.com/johnsmith88/heart-disease-dataset.

      42. Yildiz, B., Bilbao, J.I., Sproul, A.B., A review and analysis of regression and machine learning models on commercial building electricity load forecasting. Renewable Sustainable Energy Rev., 73, 1104–1122, 2017.

      43. Singh, Y., Kaur, A., Malhotra, R., Comparative analysis of regression and machine learning methods for predicting fault proneness models. Int. J. Comput. Appl. Technol., 35, 2–4, 183–193, 2009.

      45. Razi, M.A. and Athappilly, K., A comparative predictive analysis of neural networks (NNs), nonlinear regression and classification and regression tree (CART) models. Expert Syst. Appl., 29, 1, 65–74, 2005.

      46. Lu, Q. and Lund, R.B., Simple linear regression with multiple level shifts. Can. J. Stat., 35, 3, 447–458, 2007.

      47. Tranmer, M. and Elliot, M., Multiple linear regression, The Cathie Marsh Centre for Census and Survey Research (CCSR), vol. 5, pp. 30–35, 2008.

      48. Kutner, M.H., Nachtsheim, C.J., Neter, J., Li, W., Applied Linear Statistical Models, vol. 5, McGraw-Hill Irwin, New York, 2005.

      49. Noorossana, R., Eyvazian, M., Amiri, A., Mahmoud, M.A., Statistical monitoring of multivariate multiple linear regression profiles in phase I with calibration application. Qual. Reliab. Eng. Int., 26, 3, 291–303, 2010.

      50. Ngo, T.H.D. and La Puente, C.A., The steps to follow in a multiple regression analysis, in: SAS Global Forum, 2012, April, vol. 2012, pp. 1–12.

      51. Hargreaves, B.R. and McWilliams, T.P., Polynomial trendline function flaws in Microsoft Excel. Comput. Stat. Data Anal., 54, 4, 1190–1196, 2010.

      52. Dethlefsen, C. and Lundbye-Christensen, S., Formulating State Space Models in R With Focus on Longitudinal Regression Models, Department of Mathematical Sciences, Aalborg University, Denmark, 2005.

      53. Brown, A.M., A step-by-step guide to non-linear regression analysis of experimental data using a Microsoft Excel spreadsheet. Comput. Methods Programs Biomed., 65, 3, 191–200, 2001.

      54. Tripepi, G., Jager, K.J., Dekker, F.W., Zoccali, C., Linear and logistic regression analysis. Kidney Int., 73, 7, 806–810, 2008.

      55. Press, S.J. and Wilson, S., Choosing between logistic regression and discriminant analysis. J. Am. Stat. Assoc., 73, 364, 699–705, 1978.

      56. Menard, S., Applied Logistic Regression Analysis, vol. 106, Sage, United States of America, 2002.

      57. Demartines, P. and Hérault, J., Curvilinear component analysis: A self-organizing neural network for nonlinear mapping of data sets. IEEE Trans. Neural Networks, 8, 1, 148–154, 1997.

      58. Max, T.A. and Burkhart, H.E., Segmented polynomial regression applied to taper equations. For. Sci., 22, 3, 283–289, 1976.

      59. Bendel, R.B. and Afifi, A.A., Comparison of stopping rules in forward “stepwise” regression. J. Am. Stat. Assoc., 72, 357, 46–53, 1977.

      60. Mahmood, Z. and Khan, S., On the use of k-fold cross-validation to choose cutoff values and assess the performance of predictive models in stepwise regression. Int. J. Biostat., 5, 1, 1–21, 2009.

      61. Hoerl, A.E., Kannard, R.W., Baldwin, K.F., Ridge regression: Some simulations. Commun. Stat.-Theory Methods, 4, 2, 105–123, 1975.

      63. Hans, C., Bayesian lasso regression. Biometrika, 96, 4, 835–845, 2009.

      64. Zou, H. and Hastie, T., Regularization and variable selection via the elastic net. J. R. Stat. Soc.: Series B (Stat. Methodol.), 67, 2, 301–320, 2005.

      65. Ogutu, J.O., Schulz-Streeck, T., Piepho, H.P., Genomic selection using regularized linear regression models: ridge regression, lasso, elastic net and their extensions, in: BMC Proceedings, 2012, December, BioMed Central, vol. 6, no. S2, p. S10.

      66. Brieuc, M.S., Waters, C.D., Drinan, D.P., Naish, K.A., A practical introduction to Random Forest for genetic association studies in ecology and evolution. Mol. Ecol. Resour., 18, 4, 755–766, 2018.

      67. Jurka, T.P., Collingwood, L., Boydstun, A.E., Grossman, E., van Atteveldt, W., RTextTools: A Supervised Learning Package for Text Classification. R J., 5, 1, 6–12, 2013.

      68. Criminisi, A., Shotton, J., Konukoglu, E., Decision forests: A unified framework for classification, regression, density estimation, manifold learning and semi-supervised learning. Found. Trends Comput. Graph. Vis., 7, 2–3, 81–227, 2012.

      69. Shi, T. and Horvath, S., Unsupervised learning with random forest predictors. J. Comput. Graph. Stat., 15, 1, 118–138, 2006.

      70. Settouti, N., Daho, M.E.H., Lazouni, M.E.A., Chikh, M.A., Random forest in semi-supervised learning (Co-Forest), in: 2013 8th International Workshop on Systems, Signal Processing and their Applications (WoSSPA), 2013, May, IEEE, pp. 326–329.

      71. Gu, L., Zheng, Y., Bise, R., Sato, I., Imanishi, N., Aiso, S., Semi-supervised learning for biomedical image segmentation via forest oriented super pixels (voxels), in: International Conference on Medical Image Computing and Computer-Assisted Intervention, 2017, September, Springer, Cham, pp. 702–710.

      72. Fiaschi, L., Köthe, U., Nair, R., Hamprecht, F.A., Learning to count with regression forest and structured labels, in: Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), 2012, November, IEEE, pp. 2685–2688.

      73. Welinder, P., Branson, S., Perona, P., Belongie, S.J., The multidimensional wisdom of crowds, in: Advances in Neural Information Processing Systems, pp. 2424–2432, 2010.

      74. Oza, N.C., Online bagging and boosting, in: 2005 IEEE International Conference on Systems, Man and Cybernetics, 2005, October, vol. 3, IEEE, pp. 2340–2345.

      75. Wang, G., Hao, J., Ma, J., Jiang, H., A comparative assessment of ensemble learning for credit scoring. Expert Syst. Appl., 38, 1, 223–230, 2011.

      76. Yerima, S.Y., Sezer, S., Muttik, I., High accuracy android malware detection using ensemble learning. IET Inf. Secur., 9, 6, 313–320, 2015.

