ChatGPT

Let's start with some everyday questions

  • What is it?
  • Who uses it?
  • Where does it come from?
  • How does it work?
  • Why might we like it/avoid it?

 

What is it?

Wikipedia:

ChatGPT is a chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and has been fine-tuned using both supervised and reinforcement learning techniques.

OpenAI:

We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.

Let's listen to a BBC explanation:

https://www.youtube.com/watch?v=BWCCPy7Rg-s

(Watch to 3:19)

Is it really an amazing new thing?

(Continue watching to 4:15)

Does it change everything?

(Watch to 5:34)

Who can use it? - and for what

Ethical Issues

(Watch to 6:50)

Regulation of content

(Watch to 7:52)

Where does it come from?

It is simply an advance in the algorithmic work that has been going on under the banner of "AI" for 50 years. The computer is told to follow rules such as: append, as the next word, the word that most often follows the previous word in sentences that contain 'rabbits'.

OpenAI is one of the very many entities whose work progresses the Web. It is interesting that Microsoft has bought into the most recent version, and for sure they will have paid to control what the existing developers do - that is how they gain control of the technology.

How does it work?

There are two main parts: one has to decide what to say and the other how to say it. Both are a series of algorithms (rules) the stupid computer has to follow.
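The "next word" rule mentioned above can be sketched as a toy bigram model. This is only an illustration of the basic idea, not how ChatGPT itself is built (which uses vastly larger transformer models): count which word most often follows each word in a tiny made-up corpus, then generate text by repeatedly appending the most likely next word.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; real systems train on hundreds of gigabytes of text.
corpus = "rabbits eat grass . rabbits dig burrows . rabbits eat carrots .".split()

# Count how often each word follows each other word (a "bigram" table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the word that most often follows `word` in the corpus."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

# Generate a short "sentence" starting from 'rabbits'.
word, sentence = "rabbits", ["rabbits"]
for _ in range(3):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```

The computer follows these counting rules with no understanding of rabbits or grass, which is exactly the point made above: it decides what to say purely from statistics about which words tend to follow which.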

Why might we like it/avoid it?

The question seems to be: what do we want to do, and will it help us? Many people seem preoccupied with whether it will replace our jobs, and even us. This does not seem to be a real issue. What follows are some discussion points that might help clarify our thinking.

Some quotes from Link experts discussing ChatGPT

Link is at https://mailman.anu.edu.au/mailman/listinfo/link

Software engineer Mazen Kourouche, creative director at Litmus Digital, is already using the AI to replicate large parts of his work. ChatGPT excels at writing the algorithms that sit at the functional heart of good code. It is very good at building websites. And it can quickly parse Kourouche’s code and spot errors.

But like everyone The Age and the Herald spoke to for this story, Kourouche is not worried about it taking his job. He thinks it will simply make him more productive.

“AI is not going to replace our jobs,” he says. “But someone who knows how to use it will.”
——————————
Stephen Loosley
ChatGPT is the public face of a breakthrough in AI models. Deep learning, a technique that allows AIs to learn from large amounts of data, has been around for decades but really took off in 2010 as computers powerful enough to run them efficiently started to arrive.

The field accelerated exponentially in 2017 with the development of transformer models like ChatGPT – an approach that allows AIs to understand the relationship between words.

By running these models on supercomputers – Microsoft built one specifically for ChatGPT – and training them on almost 600GB of data from books, Wikipedia and the wider internet, these AIs are able to convincingly generate human-like text. Others like DALL-E can generate whole artworks from text prompts.

The applications are obvious: report writing. Internet copywriting. Graphic design. Software engineering. Social media marketing. Customer service.

ChatGPT itself said it expects “jobs that involve repetitive tasks, data entry and simple decision making are most likely to be replaced … customer service representatives, telemarketers, and data entry clerks”.

The models do have limitations. Centrally: they don’t actually understand what they are talking about.

Talking to a medical AI is like talking to an actor playing a doctor. They can confidently answer a wide range of medical questions, says Professor Nicholas Davis, co-director of the Human Technology Institute, but ultimately “you’re still talking to someone who is impersonating a doctor”.

That means they lack the “human element of cadence, voice of the customer, flow, idiom, even human empathy” that remains central to many creative pursuits, says Sydney-based SEO copywriter Rachel Green.
————————————————————
Tom Worthington
ChatGPT, and other AI, will take away some jobs and change many. The problem is that AI does more of the simple tasks, tending to eliminate entry-level jobs. So where does someone gain experience to get one of the remaining advanced jobs?
————————
Tony Barry
I'd been ignoring ChatGPT as I had the impression that it was just something being over-hyped. I've been using it actively for a week and I'm very impressed. This technology is going to be extremely disruptive. As many of you may know I'm a librarian and have worked as a research officer in a variety of contexts, mainly at the Australian Parliament. Doing the same work now, it would be a toss-up whether my job would have been eliminated or, and I think more likely, I would have become an order of magnitude more productive. To use this tool, the key is not in the answers it will find but in the questions that get asked and the follow-up "chat" with the system to refine, elaborate and eliminate errors. I haven't used it long enough to get a feel for how well you can get it to document what its sources are for what it writes, which is what I'll look at next. It is foolish to use it as an oracle without knowing what the sources are.
————————————
Tom Worthington
Judging by the evidence given to the RoboDebt inquiry, AI would do a better job of explaining its decisions than humans. AI would say: "The government wanted to get the support of rich people, by persecuting poor people, so that is what we did."

AI could be used to patiently explain the reasons for a decision. Of course the client should be able to appeal to a human, but just explaining why a decision was made would help in a lot of cases.
——————————————
David Lochrin
Human affairs need to be managed by humans, not machines.

No computer so far has achieved anything remotely approaching sentient consciousness (whatever Elon Musk may have said about those in his driverless cars :-), let alone human experience of the world. Our understanding is the product of our brains' and bodies' evolution on Earth over at least the last one million years, as encoded into the brain of every new-born child.

It's this on which the ethical foundations of human civilisation are fundamentally built.

So whether we like it or not, we're stuck with the current mess and we need to figure it out ourselves. Appeals to the mystique of "AI" are beginning to sound to me like appeals to God, assuming she/he/it exists, to please rescue us!!
————————
Stephen Loosley 2/2/2023
- he talks about the deal Microsoft did with OpenAI….
——————————
See ABC RN's Download This Show: "Is ChatGPT worth the hype?" Broadcast 26 Jan 2023 https://www.abc.net.au/radionational/programs/downloadthisshow/chat-gpt-and-the-new-age-of-ai/14136048
——————————————
David Lochrin
Neural networks have been around since the late 1960s, so "AI" isn't anything new. However I suspect most people's conception of "AI" is still so vague as to be quite misleading, and when this extends to those in the legislature and judiciary we have a problem. The very expression "artificial intelligence" is unfortunate because it suggests a computer which works rather like the human brain, capable of human judgements and emotions. This probably isn't helped by a statement attributed to Elon Musk, which I noticed somewhere recently, to the effect that he couldn't be sure the computers in Tesla cars were not sentient to some degree.
-------------------------
Bernard Robertson-Dunn
I just had a conversation with ChatGPT

Me: what is a Bravais pendulum

ChatGPT: A Bravais pendulum is a simple pendulum with a single point of suspension and a spherical bob, named after French physicist Auguste Bravais. It is used to determine the local value of g, the acceleration due to gravity, by measuring the time period of oscillation.

Me: no it isn't

ChatGPT: I apologize, it seems my previous answer was incorrect. Can you please provide more information or correct my understanding of a Bravais pendulum?

Me: a Bravais pendulum is a pendulum that swings in an ellipse not a straight line

ChatGPT: Thank you for correcting me. Yes, you are correct. A Bravais pendulum is a type of pendulum that swings in an elliptical path, not a straight line. This is due to the combination of the pendulum's weight and the effects of Earth's rotation. The elliptical path of the pendulum is used to study the laws of planetary motion.

Me: why does it gain time when it rotates one way and loses when it rotates the other way

ChatGPT: The behavior of a Bravais pendulum can be understood by considering the effects of Earth's rotation on the pendulum's motion. As the pendulum swings, it experiences a Coriolis force that acts perpendicular to both its velocity and the axis of rotation of the Earth. This force causes the pendulum's motion to deviate from a straight path and follow an elliptical path instead.

ChatGPT: When the pendulum swings in one direction, the Coriolis force causes it to move in a direction that is ahead of its original path, effectively "gaining time." When it swings in the other direction, the Coriolis force causes it to move in a direction that is behind its original path, effectively "losing time." This results in a variation of the period of oscillation with the orientation of the pendulum, and is an example of how the Earth's rotation affects the motion of objects.

Comments:

Its first response was a direct statement with no indication of uncertainty. But it was wrong.

When I challenged it, it told me I was correct, repeated what I had said and added additional information.

When I asked a further question - which is actually what makes the Bravais pendulum so interesting - it gave me even more additional information, but that information is wrong.

"This force causes the pendulum's motion to deviate from a straight path and follow an elliptical path instead." By definition, a Bravais pendulum starts with an elliptical path, and the force only modifies the period.

Why didn't it start with all the information it eventually gave me?

And how would I know if it ever gets it right? It happens that I know about a Bravais pendulum, but others might not be in the same position.

Apart from responding in smooth English, it is about as trustworthy as Wikipedia or Google. They both provide a starting point but there is a need to check. At least Google gives you links to many alternative routes to a variety of answers and Wikipedia has references you can follow up.

Unfortunately, you seem to have to know something about the question and its possible/probable answer to work out if it is telling you useful information.

To me it looks rather like GPS navigation systems. Don't put blind trust in either.
——————————————
Karl Auer
I wonder what would have happened if you had responded that a Bravais pendulum is designed to detect the presence of water in the vicinity of gullible people? Or to detect the locus of health issues in unwell people? Or the presence of viable marriage partners? Or pregnancy?

Are your responses guiding it to other/more sources, or are they sources themselves?
—————————————————
David Lochrin
It seems its most significant component (and probably OpenAI's major intellectual-property investment) is a novel natural-language model and implementation. Presumably a front-end component parses a query in an attempt to locate its linguistic elements and uses that to search the 'net, and then a back-end converts the results to natural language again for presentation to the enquirer.

Much of ChatGPT's charm lies in this language model. People are impressed by the "intelligence" of an apparently natural, human-like reply, and sounding like the HAL 9000 wouldn't do any harm either.

The language model may well be based on some form of existential calculus (which is distinct from the predicate calculus) - see "Beginning Logic" by E.J. Lemmon (Nelson) and "Word & Object" by WVO Quine if you're a glutton for punishment.
———————————