Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, & L. Bottou (Eds.), NIPS’12: Proceedings of the 25th International Conference on Neural Information Processing Systems (Vol. 1, pp. 1097-1105). Curran Associates. http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf
Calkins, S. (1983). The new Merger Guidelines and the Herfindahl-Hirschman Index. California Law Review, 71(2), 402-429. https://doi.org/10.2307/3480160
Chen, L., & Lee, C. M. (2017). Predicting audience’s laughter using convolutional neural network. arXiv. https://arxiv.org/abs/1702.02584v2
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv. https://arxiv.org/abs/1810.04805v2
Hirschman, A. O. (1964). The paternity of an index. The American Economic Review, 54(5), 761.
Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780. https://doi.org/10.1162/neco.1997.9.8.1735
Joachims, T. (1998). Text categorization with support vector machines: Learning with many relevant features. In C. Nédellec & C. Rouveirol (Eds.), Machine learning: ECML-98: 10th European Conference on Machine Learning, Chemnitz, Germany, April 21-23, 1998, proceedings (pp. 137-142). Springer. https://doi.org/10.1007/BFb0026683
Johnson, R., & Zhang, T. (2015). Effective use of word order for text categorization with convolutional neural networks. In R. Mihalcea, J. Chai, & A. Sarkar (Eds.), Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 103-112). Association for Computational Linguistics. https://doi.org/10.3115/v1/N15-1011
Joulin, A., Grave, E., Bojanowski, P., & Mikolov, T. (2016). Bag of tricks for efficient text classification. arXiv. https://arxiv.org/abs/1607.01759v3
Lai, S., Xu, L., Liu, K., & Zhao, J. (2015). Recurrent convolutional neural networks for text classification. In AAAI’15: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (pp. 2267-2273). Association for the Advancement of Artificial Intelligence.
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521, 436-444. https://doi.org/10.1038/nature14539
Lewis, D. D., Yang, Y., Rose, T. G., & Li, F. (2004). RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5, 361-397.
Liston-Heyes, C., & Pilkington, A. (2004). Inventive concentration in the production of green technology: A comparative analysis of fuel cell patents. Science and Public Policy, 31(1), 15-25. https://doi.org/10.3152/147154304781780190
Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv. https://arxiv.org/abs/1301.3781v3
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, & K. Q. Weinberger (Eds.), NIPS’13: Proceedings of the 26th International Conference on Neural Information Processing Systems (Vol. 2, pp. 3111-3119). Curran Associates.
Pang, B., Lee, L., & Vaithyanathan, S. (2002). Thumbs up? Sentiment classification using machine learning techniques. In EMNLP ’02: Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing (Vol. 10, pp. 79-86). Association for Computational Linguistics. https://doi.org/10.3115/1118693.1118704
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., & Li, F.-F. (2015). ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 115(3), 211-252. https://doi.org/10.1007/s11263-015-0816-y
Saif, H., Fernández, M., He, Y., & Alani, H. (2013). Evaluation datasets for Twitter sentiment analysis: A survey and a new dataset, the STS-Gold. In C. Battaglino, C. Bosco, E. Cambria, R. Damiano, V. Patti, & P. Rosso (Eds.), Proceedings of the First International Workshop on Emotion and Sentiment in Social and Expressive Media: Approaches and perspectives from AI (pp. 9-21). http://ceur-ws.org/Vol-1096/proceedings.pdf
Salton, G. (1989). Automatic text processing: The transformation, analysis, and retrieval of information by computer. Addison-Wesley Longman Publishing.
Sebastiani, F. (2002). Machine learning in automated text categorization. ACM Computing Surveys, 34(1), 1-47. https://doi.org/10.1145/505282.505283
Simpson, E. H. (1949). Measurement of diversity. Nature, 163, 688. https://doi.org/10.1038/163688a0
Sun, Y., Wang, S., Li, Y., Feng, S., Chen, X., Zhang, H., Tian, X., Zhu, D., Tian, H., & Wu, H. (2019). ERNIE: Enhanced representation through knowledge integration. arXiv. https://arxiv.org/abs/1904.09223
Sun, Y., Wang, S., Li, Y., Feng, S., Tian, H., Wu, H., & Wang, H. (2019). ERNIE 2.0: A continual pre-training framework for language understanding. arXiv. https://arxiv.org/abs/1907.12412
Tseng, Y.-H., & Teahan, W. J. (2004). Verifying a Chinese collection for text categorization. In K. Jarvelin, J. Allan, P. Bruza, & M. Sanderson (Eds.), Proceedings of Sheffield SIGIR - Twenty-Seventh Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 556-557). ACM Press.
Turney, P. D. (2002). Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In P. Isabelle (Chair), ACL ’02: Proceedings of the 40th Annual Meeting on Association for Computational Linguistics (pp. 417-424). Association for Computational Linguistics. https://doi.org/10.3115/1073083.1073153
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 30. Neural Information Processing Systems Foundation. https://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf
Witten, I. H., Frank, E., & Hall, M. A. (2011). Data mining: Practical machine learning tools and techniques (3rd ed.). Morgan Kaufmann Publishers.
Yan, L., Zheng, Y., & Cao, J. (2018). Few-shot learning for short text classification. Multimedia Tools and Applications, 77(22), 29799-29810. https://doi.org/10.1007/s11042-018-5772-4
Yang, Y., & Liu, X. (1999). A re-examination of text categorization methods. In F. Gey, M. Hearst, & R. Tong (Chairs), SIGIR ’99: Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 42-49). Association for Computing Machinery. https://doi.org/10.1145/312624.312647
Yang, Z., Dai, Z., Yang, Y., Carbonell, J. G., Salakhutdinov, R., & Le, Q. V. (2019). XLNet: Generalized autoregressive pretraining for language understanding [Paper presentation]. 33rd Conference on Neural Information Processing Systems, Vancouver, Canada. https://papers.nips.cc/paper/8812-xlnet-generalized-autoregressive-pretraining-for-language-understanding.pdf
Zhang, X., Zhao, J., & LeCun, Y. (2015). Character-level convolutional networks for text classification. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, & R. Garnett (Eds.), NIPS’15: Proceedings of the 28th International Conference on Neural Information Processing Systems (Vol. 1, pp. 649-657). MIT Press.
Zhang, Z., Han, X., Liu, Z., Jiang, X., Sun, M., & Liu, Q. (2019). ERNIE: Enhanced language representation with informative entities. In A. Korhonen, D. Traum, & L. Màrquez (Eds.), Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 1441-1451). Association for Computational Linguistics. https://doi.org/10.18653/v1/P19-1139