dc.description.abstract | The development of accurate sentiment analysis and aspect detection for the Bengali language is crucial given the rise of Bengali usage in digital media. Sentiment analysis and aspect detection are essential Natural Language Processing (NLP) tasks, as they allow meaningful information to be extracted from textual data. In this study, we explore the performance of advanced NLP techniques on Bengali text classification tasks, specifically sentiment analysis and aspect detection. To this end, we compare the Bidirectional Encoder Representations from Transformers (BERT) model with Bi-LSTM, LSTM, and GRU models. We collect two Bengali datasets and preprocess them to be compatible with the input format required by BERT. The model is then trained and tested on the preprocessed data. Our results show that the BERT model outperforms the traditional Bi-LSTM, LSTM, and GRU models, achieving 92.5% accuracy in sentiment classification and 90.4% accuracy in aspect detection. The precision, recall, and F1-score values further support the superior performance of BERT. Our study highlights the effectiveness of advanced NLP techniques such as BERT in text classification tasks for the Bengali language. This opens up new avenues for future work in the field of Bengali NLP, specifically | en_US
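As a rough illustration of the preprocessing step mentioned in the abstract (converting raw Bengali text into the input format BERT expects), the sketch below uses the HuggingFace transformers tokenizer API. The checkpoint name (bert-base-multilingual-cased), the maximum sequence length, and the example sentences are assumptions for illustration only, not details taken from the study.

    # Minimal, hypothetical sketch: tokenizing Bengali text into BERT-style inputs.
    # Checkpoint, max_length, and example sentences are illustrative assumptions.
    from transformers import AutoTokenizer

    # Multilingual BERT covers Bengali script; the study's actual checkpoint may differ.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

    sentences = [
        "খাবারটা খুব ভালো ছিল",   # "The food was very good" (assumed positive example)
        "সার্ভিস একদম ভালো না",   # "The service is not good at all" (assumed negative example)
    ]

    # Produce input_ids, attention_mask, and token_type_ids,
    # padded/truncated to a fixed length, as BERT requires.
    encoded = tokenizer(
        sentences,
        padding="max_length",
        truncation=True,
        max_length=128,        # assumed maximum sequence length
        return_tensors="pt",
    )

    print(encoded["input_ids"].shape)       # (2, 128)
    print(encoded["attention_mask"].shape)  # (2, 128)

The resulting tensors can then be fed to a sequence-classification head (for sentiment labels or aspect categories) during fine-tuning, which is the setup the abstract describes at a high level.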