Thursday, January 19, 2023

Can ChatGPT be used for teaching critical thinking and informal logical fallacies?

It's been a while since I last wrote a blog post, but with all the recent media attention (and my own interest), I thought it would be interesting to share some of my results in using ChatGPT. Specifically, using it to replicate the things this blog, a book and a podcast did regularly, back when blogs were big and podcasts were small.


What is ChatGPT and how could it assist in teaching critical thinking and analysing arguments?

Unless you've been a participant in the series 'Alone' since December last year, you've probably heard about ChatGPT, the cutting-edge language model developed by OpenAI that utilises deep learning techniques to generate human-like text. The recent media attention has highlighted its potential to revolutionise various industries and applications, including education. While there are plenty of potential challenges of AI in education, an interesting application of this technology is its potential to help teach critical thinking, informal logical fallacies, and how to analyse/improve arguments.

ChatGPT can be used to generate realistic examples of common logical fallacies and explain where the argument goes wrong. This can help students develop the skills necessary to recognise and avoid these fallacies in their own arguments and in the arguments of others. For example, it can generate a statement like "Vaccines cause autism", which is (sadly) a common example of the logical fallacy called False Cause.

Additionally, ChatGPT can be used to analyse and evaluate arguments by generating counterarguments and identifying any weak points in the reasoning. For example, it can be used to evaluate an argument in favor of increasing the minimum wage, by generating counterarguments such as "increase in minimum wage would lead to inflation" or "small businesses would suffer" and help students identify any weak points in the reasoning.

You can also use ChatGPT to facilitate discussions and debates by generating thought-provoking questions and prompts. For example, it can generate a question like "Should the government regulate social media to combat misinformation?" This can help students to consider different perspectives and to develop their own critical thinking skills.

Overall, ChatGPT has the potential to be a valuable tool for teaching critical thinking, informal logical fallacies and to analyse people's arguments. Despite concerns about the potential challenges of AI in education, the technology holds a lot of potential in various fields. As the technology continues to improve, it will likely become an even more powerful tool for educators and students alike.

Hopefully anyone who has played around with ChatGPT has realised by now that most of the above is a copy and paste from ChatGPT. It's too well written to be me (and it's kind of boring and lacks my use of irony or references to TV shows).

See below for my initial few prompts and its response.


Using ChatGPT to generate an explanation of a concept, such as an informal logical fallacy.

My next step was to get it to explain a fallacy.


As Larry David would say, pretty, pretty, good. 


Using ChatGPT to analyse an argument.

I then decided to see how it would go assisting me in analysing text for fallacies, which is pretty much the entirety of this blog. So I tested ChatGPT against some text from this post: No conflict here, just scientists doing their jobs - you know - science


Okay, I told it to look for a particular fallacy. So how about just looking for them in general?

And then again for another bit of text from the blog post:

It's certainly not perfect, and I've found other examples where ChatGPT is completely wrong, but as an assistant and teaching tool it already has a lot of potential. Just as with Wikipedia for research, it's a good starting point for a topic or for constructing (or deconstructing) an argument, but not an endpoint.
We said in the introduction to Humbug!, that the book: 
...is more a tool to consult as the occasion demands, rather than a book to read in a linear fashion. You may find it to be a useful resource for those occasions when you read or hear a suspect statement or claim, and want to identify the flawed reasoning in the assertion.
If I ever do a 3rd Edition I'd probably update this point to suggest running the suspect statement or claim through ChatGPT or an equivalent AI, and then checking its output against an authoritative source. Speaking of which (and as an aside) I just realised that according to Betteridge's law and my headline, all of the above is wrong.
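As a rough sketch of that suggested workflow, here's how one might wrap the "run the suspect statement through ChatGPT" step in code. The prompt wording and the `fallacy_prompt` helper are hypothetical illustrations, not the exact prompts used in this post, and the actual API call is left out since it depends on which chat service (and client library) you use.

```python
def fallacy_prompt(text: str) -> str:
    """Build a prompt asking a chat model to spot informal fallacies.

    The wording here is a hypothetical example, not the exact prompt
    used in this post. Send the result to whichever chat API or web
    interface you use, then check the model's answer against an
    authoritative source.
    """
    return (
        "Identify any informal logical fallacies in the following text. "
        "Name each fallacy and explain where the reasoning goes wrong:\n\n"
        + text
    )

claim = "Vaccines cause autism; my nephew was diagnosed right after his shots."
prompt = fallacy_prompt(claim)
# prompt is now ready to paste into ChatGPT or send via an API client.
```

The point of keeping the prompt-building separate from the model call is the same point made above: the AI's output is a starting point to be checked, not an endpoint.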

Monday, August 24, 2020

Valuing scientific expertise

This is the second of two posts that go hand-in-hand. The first post is a brief overview of some important concepts and well established results of psychology that help me think about my thinking. This second post is about valuing scientific expertise and why our non-expert default position should be to believe scientific consensuses, or at the very least, to be comfortable in saying, "I don't know".  

You are not an expert (and neither am I) 

As Socrates is said to have said, wisdom is knowing that you don't know. A big part of critical thinking is intellectual humility. While it is possible to become an expert at something, it takes years and years of training in a discipline. Think of something to which this applies for you. Think about all the time, hard work, practice and dedication it took to earn that expertise. Think about all the mistakes you learned from and the pitfalls you now know how to avoid. On that one thing, you're probably pretty safe talking at length about all that you know. And when you meet other experts, they can easily recognise that you know what you're doing or talking about.

Now think about non-experts and novices trying to talk to you about it. How much credit do you give their views? Well, that's you (and me) when it comes to every other field of human endeavour. It's the ultimate form of arrogant overconfidence to assume your opinion on a subject that you don't have expertise in has much, if any, merit. And when you dismiss expert consensus out of hand, by definition, your dismissal of something you know nothing about is worthless. In fact, it says everything about you and nothing about the experts or their view(s). Imagine a non-expert novice dismissing your views out of hand when discussing a subject you are an expert in.

Scientific (and expert) consensus

A scientific consensus is more than scientists within a field being surveyed for their beliefs or tallying their views in research papers. It relates to what the researchers in a field are interested in studying and why. When there is a question to be answered or a problem to be solved, they'll be researching it and arguing about it; i.e. doing science. But when a question is answered to sufficient satisfaction, e.g. anthropogenic climate change, evolution by natural selection or the second law of thermodynamics, there is little or no incentive to keep researching this question — it's done. Scientists operate within well-established paradigms for a reason — they're well established based on the convergence of theory with data.

Friday, August 21, 2020

Thinking about thinking

This is the first of a planned two posts that I've been mulling over and sporadically writing notes to myself about over the last year or so. This first post is a very brief overview of some important concepts and well established results of psychology that help me think about my thinking. The second post is about valuing scientific expertise and why our non-expert default position should be to believe scientific consensuses, or at the very least, to be comfortable in saying, "I don't know". Both include a lot of links to further reading and learning if you're so inclined.  

Why?

Since the ascendancy of Donald Trump to the White House, and now with the coronavirus pandemic, I've noticed a lot of weird beliefs and thinking take hold across social media. Common to these groups is the idea that "the media" and other non-specific all-powerful groups or people (the "deep state", Bill Gates) are manipulating us with their "fake news", false narratives and sinister motives; that scientists and so-called "experts" can't be trusted. Unlike me, this new group of free and critical thinking peoples see the truth; the scales have fallen from their eyes (not that they are being manipulated by an authoritarian dictator and a disinformation campaign). I, on the other hand, as someone who tends to take experts and journalists who have a proven track record at face value, am one of the “sheeple”.


Arguing with people who have fallen down the YouTube and Facebook rabbit hole about their specific beliefs and claims is exhausting and likely to backfire. As such, I thought I'd take a different tack and discuss how I think about critical thinking vs non-critical and pseudo-critical thinking, and the importance of valuing expertise. 

If I have sent you a link to this post, it's because I know I won't convince you about whatever it is we're disagreeing about. I'm not even going to try. Forget about whatever it was we were discussing as it’s not specific to what follows. What I hope to do is to show how I think about any claim, how I try to reflect on my own thinking (I'm not always successful), and in another post, why I think we should all value and respect expertise. 

Critical thinking vs non-critical and pseudo-critical thinking

First, I think it's important to distinguish between the three different modes of thinking that to some degree we all use to come to a point of view — critical thinking, non-critical thinking (see Kahneman's slow and fast thinking) and pseudo-critical thinking.

Saturday, November 23, 2019

Conservative bias about 'leftist bias'

The ABC recently published an article Teachers, schools in firing line as conservatives rail against 'leftist agenda', based on their Australia Talks National Survey. The crux of the piece is that: 'One Nation voters are turning on the mainstream education system as conservatives across the country express a deep mistrust of what they say is a "leftist agenda" taking over the classroom.'

There's nothing wrong with the story per se, but the headline and associated graphs they use send a misleading message. They show the satisfaction of voters of different parties with the education system and with teachers.




However, the percentages sum to 100% within each party, thus greatly 'biasing' the results towards parties with a small number of voters. Consider the sizes of the orange columns, which belong to One Nation, who nationally polled 1.29% at the last federal election.

When you adjust the graphs based on the national first preferences, making them out of 100% in total, this is the result.
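To make that adjustment concrete, here's a minimal sketch of the re-weighting. The One Nation vote share of 1.29% comes from this post; the other vote shares and all of the satisfaction percentages are made-up placeholders, not the actual Australia Talks survey or election figures.

```python
# Hypothetical 'satisfied with the education system' percentages,
# reported out of 100% within each party (as in the original graphs).
satisfied_within_party = {"LNP": 60.0, "ALP": 70.0, "One Nation": 30.0}

# National first-preference vote shares. Only One Nation's 1.29% is
# from the post; the others are illustrative placeholders.
national_first_preference = {"LNP": 41.0, "ALP": 33.0, "One Nation": 1.29}

# Re-express each party's column as a share of the whole electorate,
# so the columns are out of 100% in total rather than 100% per party.
adjusted = {
    party: satisfied_within_party[party] * national_first_preference[party] / 100
    for party in satisfied_within_party
}
```

On these placeholder numbers, One Nation's 30-point column shrinks to about 0.39 points of the electorate, which is the effect the adjusted graphs show.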



There are certainly lessons to be learned in terms of why different groups responded to the survey in this way, but it should be kept in perspective. This isn't about 'conservatives', given that the blue LNP would fall under that category. It's about a small far-right party that already gets quite a bit of media attention, especially when compared to the proportion of the population who throw votes their way. Further, as the article goes on to say: "[t]he root of the frustration can be traced to a wider dissatisfaction with the political landscape...", rather than with the education system and teachers themselves.

Monday, September 17, 2018

Education 'Cargo Cults'

Education guru (not self-proclaimed) John Hattie, along with co-author Arran Hamilton, list a number of fatal cognitive biases in the blog post How to stop cognitive biases from undermining your impact, which is well worth a read for any educator (or anyone for that matter). They point out that:
...a growing database of Cognitive Biases or glitches in our human operating system have been catalogued and confirmed through laboratory experiment and psychometric testing. The research suggests that biases afflict all of us, unless we have been trained to ward against them.
The post is a small part of the bigger white paper: Education cargo cults must die (pdf). It's a somewhat ironic paper, given the criticism levelled at Hattie's guru status and cult-like following. However, this is something Hattie and Hamilton are well aware of, noting the danger of appealing to authority:
Authority Bias: Tendency to attribute greater weight and accuracy to the opinions of an authority figure—irrespective of whether this is deserved—and to be influenced by it. 
EDUCATION: Don’t be swayed by famous titled gurus. Carefully unpick and test of all their assumptions—especially if they are making claims outside the specific area of expertise. Be particularly suspicious of anyone that writes and publishes a white paper [!!!] (p 20, emphasis added).
Some key quotes from the white paper: 
We make the case that ingrained cognitive biases make us all naturally predisposed to invest in educational products and approaches that conform with our existing worldview and to only grudgingly alter our behavior in the face of significant conflicting evidence. In section two, we argue that educators and policymakers must fight hard to overcome their cognitive biases and to become true evaluators of their own impact (p 9). 
Health warning: May ultimately make you feel as though you can trust what you see again, because you’ll have a framework for identifying evidence and being more skeptical of initiatives and resources that just don’t have sufficient backing (p 10).
We advocate an approach to education that is built on reason, rather than intuition alone. This involves systematic collection of data on students’ learning experiences in the classroom and the ways in which teachers and product developers can accelerate this learning. From data, we can inform intuitions and judgements and build theories. And, from theories, we can build structured processes—continually testing and refining these too (p 25).
While there are plenty of critics of Hattie's work (and of how Hattie's work tends to be used), it's great to see educators engaged in discussion about our profession and continually improving the evidence/research base of our practice. Or at the very least, minimising the chances of wasting our time on myths and errant nonsense.