
Artificial intelligence will affect all areas of intellectual life, and it is already affecting academic publishing. I am personally very interested in AI, especially how it can support language teaching and learning. I’ve been at work on a review of the use of conversational AI applications for language learning and teaching…I hope I finish it someday :-). Through doing my review, the main thing I have found is that the amount of published research is ballooning quickly. I have also found that there are already so many reviews of this research out there! I suspect that one reason for the ballooning quantity of research and the many reviews that have been published so quickly is…AI. People are using AI to support (and hopefully not supplant) their writing process. I assume most people reading this post have already experienced the wonders of AI, and the current models as of this writing (e.g., ChatGPT 4.5, Claude 3.7) are far better than the first generation of models that appeared. In this post, I want to write about some of these well-known models as well as some other AI research aids that you might not be aware of.
First, let’s talk about the “deep research” AI models that now exist in Gemini and ChatGPT (Claude still does not have internet access as of this writing, so it does not have a deep research model–as per the intention of its creators). With the deep research models, you describe the research you are doing, the model asks you some clarifying questions, and then it does its magic. An example is better than a lengthy description here, so in another post, I have put the complete results of a deep research query with ChatGPT 4.5. You can see from the example that deep research does a few things: 1) it outlines how your overall research might be structured, 2) it asks you clarifying questions about what limits you want on your research, and then 3) it produces impressive written output that closely resembles what a human might produce. All in about 6 minutes. The question is, how useful is it?
Well, it is not ready for primetime, but it is better than what I have seen from previous ChatGPT and Gemini models. For undergraduate work, this would be impressive, and it would even be good for a Master’s-level course paper. However, it certainly is not Ph.D.-level work, and it is not publishable. You will notice the citations are limited, as it is only able to access academic articles that are online and not behind a paywall. It also does not include all of the references for the works cited in the text. So, it basically serves as an overview of the research in the area. It is not comprehensive, but I find it helpful. I have already read 40+ papers about conversational AI’s benefits and limitations, and this is a decent, but incomplete, overview. Still, it is a vast improvement on what came before, and it seems downright magical if I go back to pre-2022 thinking.
What could make it better? One clear issue is that much of the academic research we access is behind paywalls–almost all of the major TESOL and second language acquisition (SLA) journals are: TESOL Quarterly, Language Learning, Applied Linguistics, Language Teaching Research, System, etc. If the AI had access to those journals as well, the output would immediately improve. I’m surprised the AI companies have not contacted Sci-hub about it.
When I asked ChatGPT about this issue, it replied:
I don’t have the ability to directly access or view full-text academic papers behind paywalls or subscription services. My responses, especially literature reviews, typically draw upon openly accessible resources or sources that provide detailed abstracts, publicly available summaries, or content that researchers have made accessible through open-access repositories.
Obviously, another issue with the output is that the writing style is what it is–it certainly is not my writing style, and I would think that anyone using this kind of output would be using it as notes rather than copying and pasting it (anyone who cares about their work, I mean).
A final issue continues to be the veracity and trustworthiness of the output. The only way to know whether what AI produces is reliable is to already have significant knowledge of the content. In this case, nothing in the content looks incorrect or invalid. It just looks incomplete–it is obvious to me that it is missing a lot of information because it cannot access articles behind paywalls. So, my question again is, how would you use this?
My main use of AI has been to help me think–to help me plan my writing or a strategy for doing something; basically, as an interactive aid for completing some work. For that purpose, I find it to be fantastic. I also find it to be a great editor and proofreader–it offers helpful critique and catches typos. However, I do not feel it can do the writing for me for any important task, nor do I want it to. If I could just have AI create and write all these blog posts, would I do it? No! I’m having too much fun here. Why would I have AI do something for me that I enjoy doing myself?