dc.description.abstract | The purpose of our study was to evaluate how well GPT-3 models answer software engineering questions compared to human answers in terms of language and context. Our results show that GPT-3 models produced answers that were clearer, more concise, easier to read, and had greater word overlap with the questions than human answers did. However, the examples provided by the models were of insufficient quality. Regarding language, our study also showed that GPT-3 models can produce answers that vary in polarity, word count, and code length. Additionally, there has been a general decline in interactions on StackOverflow, although it cannot yet be attributed to the use of ChatGPT. If such a causal effect does exist, it would have a major impact on the knowledge network and on how people share knowledge in the future. | en_US |