Mushtaq Bilal, PhD

@MushtaqBilalPhD

23 Tweets 13 reads Oct 21, 2023
ChatGPT, Bard, or Bing?
Which one's best suited for brainstorming research questions and outlining research papers?
Here is a comparison using identical prompts:
A couple of points before we dive in:
a. I am using ChatGPT-4 with custom instructions.
b. I am using a prompting technique that I call "incremental prompting." We start with a simple prompt and gradually increase the level of difficulty.
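The escalation described in this thread can be sketched in code. This is a minimal illustration, not the author's actual workflow: the `ask` function is a hypothetical stand-in for whatever chat interface or API you use, and the prompts are paraphrased from the steps taken later in the thread.

```python
# Sketch of "incremental prompting": each prompt is asked in the same
# conversation, so earlier answers become context for harder questions.

def ask(history, prompt):
    """Record the user prompt in the running conversation and return it.
    A real implementation would also send `history` to a chat model
    and append the model's reply."""
    history.append({"role": "user", "content": prompt})
    return history

# The difficulty ramp used in this thread, from simple recall to synthesis:
steps = [
    "In a couple of lines, what is Bakhtin's idea of the chronotope?",
    "What is the 'adventure chronotope' from The Dialogic Imagination?",
    "What is Benedict Anderson's 'homogeneous, empty time'?",
    "I argue Andersen's fairy tales fuse the two concepts; respond to this.",
    "Give me ten research questions about this 'chronotopic ambivalence'.",
    "Outline a literary-studies research paper answering those questions.",
]

history = []
for prompt in steps:
    history = ask(history, prompt)

# The whole conversation travels with every new prompt, which is what
# lets the later, harder prompts build on the earlier answers.
print(len(history))  # 6 turns accumulated
```

The point of the ramp is that by the time you ask for research questions or an outline, the model already holds your definitions and your thesis in context.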
1.1. Let's start with ChatGPT-4.
I asked ChatGPT if it knew Mikhail Bakhtin's idea of chronotope and I told it to limit its answer to a couple of lines.
It answered correctly and cited Bakhtin's (published) work. And it kept the answer limited to a couple of lines.
1.2. Bard answered the question in a little more detail.
But it did not cite any sources, and it didn't limit its response to a couple of lines either.
1.3. Bing, like Bard, responded in detail and didn't limit its answer to a couple of lines.
It cited a few references but all of these were secondary sources. ChatGPT cited a primary source.
2.1. Next I asked ChatGPT about a subtopic "adventure chronotope" that Bakhtin develops in his book The Dialogic Imagination.
ChatGPT answered correctly, kept its answer limited to a few lines, and correctly cited a source.
2.2. Bard responded to the same prompt in detail and even cited relevant examples from published literature.
This is a very helpful feature for both teachers and students.
2.3. Bing's response was lengthier than ChatGPT's but shorter than Bard's.
It also cited references to published research papers.
Want to learn how to use AI-powered apps to supercharge your academic writing?
I have a complete tutorial for you.
It's being used by 3,700+ academics including those at Harvard, Princeton, Yale, and Stanford.
You can get it here:
efficientacademicwriter.carrd.co
3.1. Next I raised the stakes a bit and asked ChatGPT a totally unrelated question about Benedict Anderson's idea of "homogeneous, empty time."
It answered correctly and cited Anderson's book.
3.2. Bard responded in detail and explained the concept quite clearly.
If I were teaching Anderson's book, I would ask my students to use Bard as a study buddy.
3.3. Bing responded correctly but its answer was less clear than Bard's.
And it cited Reddit posts as references. The quality of the references Bing cites is not up to the mark.
4.1. Next I raised the stakes a bit further.
I told ChatGPT that I am working on a paper on how Hans Christian Andersen's fairy tales fuse Bakhtin's adventure chronotope and Benedict Anderson's homogeneous, empty time. I call it "chronotopic ambivalence."
ChatGPT said it was a compelling topic that would shed new light on Andersen's stories. Very encouraging.
4.2. Bard's response was also encouraging, but unlike ChatGPT it elaborated on my idea of "chronotopic ambivalence."
Bard also gave me examples of stories that I could use to illustrate my point. This is very helpful and will save me a lot of time.
4.3. Bing, like ChatGPT, responded briefly. It also cited a few of Andersen's stories that I could use as examples.
But once again the sources it cited were not very impressive. It keeps going to sites like Wikipedia, which is helpful but you can't cite Wikipedia.
5.1. After discussing my ideas with ChatGPT, I asked it if it could give me ten research questions about chronotopic ambivalence in Hans Christian Andersen's stories.
Some of the questions ChatGPT gave me were very helpful. Answering these questions will be a great way to frame my paper.
5.2. Compared to ChatGPT's, the quality of Bard's research questions was not very impressive.
These questions could be useful for an undergrad paper but not a journal article.
5.3. Unlike ChatGPT and Bard that gave me ten research questions each, Bing gave me only five.
But the quality of Bing's questions was better than Bard's and equal to ChatGPT's.
6.1. After brainstorming research questions, I asked ChatGPT to give me an outline for a research paper.
It assumed I was writing a paper for a natural sciences journal, not a literary studies journal. I told ChatGPT the outline was not useful, and it produced a revised outline that was much better.
It looks like ChatGPT understands the structure of scientific articles better than that of humanities articles.
6.2. Compared to ChatGPT, Bard's outline was mostly useless. It created an outline for a short essay and not a research article.
6.3. Bing's outline was better than Bard's but not as detailed as ChatGPT's.
Final word:
My suggestion would be to try out all these models and see which one works best for which particular stage of the writing process.
Once you figure that out, learn to do incremental prompting for best results.
Found this comparison helpful?
1. Scroll to the top and hit the Like button on the first tweet.
2. Follow me for more threads on how to use AI apps to supercharge your academic writing.
