A Suggestion Concerning a Technology: Maybe They Should Call It “GST” (George Santos Transformer) Rather Than GPT
By Don Curren
Probably the easiest way to get into this piece would be to ask ChatGPT to write it, and then quote from what it produced and explore what was wrong and right about it.
But ChatGPT has become such a
phenomenon, so quickly, that that approach has already become a cliché.
The flood of pieces that use ChatGPT
to explore itself is a potent demonstration of how powerful and fascinating it
is, and how quickly it’s ensconced itself in popular culture and discourse.
For me, and I suspect many others, the
appearance of ChatGPT and other so-called Large Language Models (LLMs) in public
discourse in the last couple of months seems like a turning point of some kind
in the evolution of Artificial Intelligence.
The emergence in 2022 of AI that can generate images based on text prompts, such as OpenAI’s DALL-E, was like a shot across the bow.
Those systems were impressive.
But generating natural-sounding
language in response to prompts in a conversational style is more impressive, and
uncannily so.
There’s something about words strung
together in a coherent and persuasive fashion that is powerfully suggestive of
some kind of consciousness.
You can be completely aware, at one
level, that all it’s doing is cleverly mimicking patterns of verbiage in the
material it’s been trained on.
But at the same time, you can get
sucked into an intimate conversation with it and fall prey to a well-nigh
irresistible impulse to see some kind of sentience there.
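To make that point about pattern-mimicry a little more concrete, here is a minimal sketch – a toy "bigram" model written in Python, purely for illustration, and nothing like the giant neural networks behind ChatGPT – of how a program can generate plausible-looking text simply by imitating the statistics of whatever text it has been fed. The training sentence below is invented for the example.

import random
from collections import defaultdict

# Toy illustration only: learn which word tends to follow which in a
# made-up training text, then generate new text by sampling from those
# observed patterns. Real LLMs are vastly more sophisticated, but the
# spirit -- imitate the statistics of the training text -- is similar.
training_text = (
    "the chatbot wrote a plausible answer and the chatbot wrote "
    "a confident answer and the answer was wrong"
)

# Record, for each word, the words that follow it in the training text.
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

# Generate new text by repeatedly sampling a plausible next word.
word = "the"
output = [word]
for _ in range(10):
    if word not in follows:
        break
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))

Nothing in that little loop ever asks whether the sentence it produces is true; it asks only whether each word plausibly follows the one before it. That, in miniature, is the gap this piece is concerned with.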
A Flagrant Tendency Toward Fabrication
Beyond the illusions of subjectivity,
however, ChatGPT, the Bing chatbot and their peers are dangerous in part
because of their extraordinary fluency, plausibility, and ease of use.
They have something that’s all too common these days: the illusion of authoritativeness combined with a flagrant tendency toward uncontrolled fabrication. When an LLM fabricates, computer engineers apparently refer to it as “hallucination.”
My own experiments with GPT-4, the most recent iteration of the model behind ChatGPT, have been modest, but I recently asked it to give an account of my career as a journalist.
There were a few truths interspersed with
several astonishing inaccuracies.
It read like I had taken the barebones
facts about my working life – the employers I had worked for, what beats I
covered for them, etc. – and given them to George Santos for his “creative”
input.
I’m loath to repeat any of its fabrications, but at one point it had me in the position of European bureau
chief for a well-regarded newspaper that never actually hired me, despite my
occasional efforts to get it to do so.
It occurred to me while reading it
that working reporters represent a subset of language-oriented, white-collar workers
that ChatGPT will never be able to replace.
At some point reporters – as opposed
to columnists, pundits, and opinionizers of all stripes – actually have to engage
with reality beyond some kind of text.
They may do a lot of cutting and
pasting in a manner similar to ChatGPT. But eventually, some degree of
verification, of interacting with phenomena other than prefabricated pieces of text,
must be done.
That interaction with something
outside of pre-existing text, that effort to engage with objective reality, is beyond
the capabilities of LLMs. They represent a kind of entity for which there
really is nothing “outside the text.”
Despite their extraordinary speed and
fluency, these programs aren’t engaging with the world and thinking about it
the way a human being does.
“Ineradicable Defects”
“We know from the science of linguistics and the philosophy of
knowledge that they differ profoundly from how humans reason and use language.
These differences place significant limitations on what these programs can do,
encoding them with ineradicable defects,” Noam Chomsky, Ian Roberts and Jeffrey Watumull wrote in an op-ed
piece in the New York Times.
“ChatGPT
and its brethren are constitutionally unable to balance creativity with constraint,”
they wrote. “They either overgenerate (producing both truths and falsehoods,
endorsing ethical and unethical decisions alike) or undergenerate (exhibiting
noncommitment to any decisions and indifference to consequences).”
Do just a little experimenting with
ChatGPT and you’ll likely find yourself inundated with untruths, all of them
presented in lucid, grammatical and seemingly authoritative prose.
This alone would seem to present an
argument for proceeding cautiously with the release of ChatGPT and other LLMs
into the wilds of public discourse.
Which, of course, is precisely the
opposite of what happened.
The computer scientists, developers, and coders building these programs are working at the cutting edge, exploring unprecedented technologies with undreamt-of consequences.
One would expect a degree of
circumspection about releasing these technologies to the public.
But commercial considerations seem to
have driven the whole process.
It’s an interesting instance of the tendency to
give the tech industry a “free pass” when it comes to developing and disseminating
new technologies.
We’re willing to give Silicon Valley – and all the
other Silicon Places – much broader leeway than we do to other industries,
which are more extensively regulated and required to bear some responsibility
for the problems associated with their technologies. Think, for instance, of
the automotive industry.
“Why Is It That the Tech Industry Gets a Pass?”
If your new car is deficient in some important
respect, you are notified by letter and provided with an opportunity to have
the defect rectified by the automaker in question.
During arguments in the Gonzalez v. Google case at
the US Supreme Court in February, Justice Elena Kagan observed that “every
other industry has to internalize the costs of misconduct. Why is it that the
tech industry gets a pass? A little bit unclear.”
She then (sort of) answered her own question.
“On the other hand, I mean, we're a court. We really don't know about these things. You know, these are not, like, the nine greatest experts on the internet,” Kagan said.
This is a very instructive exchange. It raises the issue of the tech sector’s “free pass,” but also reflects one of the reasons it keeps getting that free pass.
As soon as she raises the issue, Justice
Kagan retreats, suggesting the Supreme Court justices don’t have the requisite
technical expertise to address it.
There’s an obvious objection: if the
members of the US Supreme Court aren’t in a position to determine how the tech
industry should act responsibly with regard to its products in the US, then who
is?
But setting that aside, Kagan’s remark
reflects how intimidated most of us are in the face of the unstoppable
juggernaut of high-tech innovation. That feeling of intimidation is one of the
key factors that prevents us from seriously questioning the onward rush of technology.
Who are we, we ask ourselves, to stand
in judgement of the wonderful machines created by the masters of Palo Alto? Even
Supreme Court justices apparently ask themselves that question, and then step
to the sidelines.
“The Myth of Tech Exceptionalism”
The online magazine Noema published a piece called “The Myth of Tech Exceptionalism” in February 2022 that explores the dilemma insightfully:
An entire
generation of “innovators” has grown up believing that technology is the key to
making the world better, that founders’ visions for how to do so are
unquestionably true and that government intervention will only stymie this
engine of growth and prosperity, or even worse, their aspirational future
innovations.
Not only
do the “innovators” believe this, they have managed to convince a lot of other
people – including key policymakers – that it’s true, and that no one, not even
Supreme Court Justices or democratically elected legislators, should seek to
interfere with the process of innovation.
The Noema piece, authored by Yaël Eisenstat and Nils Gilman, offers a criticism of this tech-industry stance, pointing out that we don’t apply the same logic to other industries.
“(The)
delights of plastics do not give the chemical industry the right to poison our
rivers and skies, so should the conveniences produced by tech not indemnify
companies from accountability for present-day harms imposed on the public,”
Gilman and Eisenstat write.
We didn’t always take that view of the plastics industry or any of the other industries that have fallen under the purview of government regulation.
Pesticides
such as DDT were routinely used without restrictions prior to Rachel Carson’s
epochal book on pollution, Silent Spring, which helped disseminate awareness of
the environmental damage they caused and foment widespread opposition to their
use.
The
analogy might not seem warranted. There seems to be a big difference between a
technology like pesticides that damages the health of human beings and other
organisms and one that simply re-arranges words and other forms of information.
But, as Marshall McLuhan observed, our media technologies create our psychic and social environments. They can have the same effect on our psychological and social functioning as the changes in our physical environment have on our physical health.
Not a Contrarian View Any More
Not being
an AI myself (at least not to the best of my knowledge), it’s taken me a while to
put together this piece.
When I started it a few weeks ago, I felt like I was arguing a somewhat contrarian position – that some oversight and regulation of these rapidly evolving technologies should, at least, be considered.
But the
GPT and AI landscape has evolved so rapidly that what once seemed like an
outlier position has become decidedly more mainstream.
More mainstream with each passing day.
In late
March, the Future of Life Institute, a non-profit with the objective of mitigating the risks posed by
transformative technology, published an open letter asking AI labs to pause
training of systems "more powerful than GPT-4."
Those who signed the letter include Apple co-founder Steve Wozniak and Elon Musk.
Last week, Geoffrey Hinton, a professor at the University of Toronto and an artificial intelligence pioneer who laid the foundations for much of the current development in AI, quit his job at Google, saying he wanted to be able to speak freely about the risks of AI.
A part of him now regrets his life’s work, Hinton told the New
York Times.
The Times reported that Hinton’s
immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able
to know what is true anymore.”
“It is hard to see how you can
prevent the bad actors from using it for bad things,” Hinton said.
On Thursday of last week, US Vice President Kamala Harris met with CEOs from top companies developing AI products and tools.
Although no specific measures in relation to LLMs have emerged from the
Biden Administration, it did express its worries about AI in a statement.
“AI is one of the most powerful technologies of our time, but in order
to seize the opportunities it presents, we must first mitigate its risks,” the
statement said.
George Santos has to date remained unsanctioned for his prolific public lying. He remains in the House of Representatives as the member for New York's 3rd congressional district despite numerous false or dubious claims about his biography, work history, criminal record, financial status, ethnicity, religion, and other matters, as Wikipedia puts it.
If we allow subsequent iterations of ChatGPT and its peers out of the
laboratory and into the wild without some form of public scrutiny, the parallel
between those LLMs and the mendacious congressman will become closer.
With the advent of LLMs, we seem to have reached a turning point. With these technologies, the drive toward Artificial Intelligence has created products that demand social intervention and oversight of some kind.
ChatGPT’s Version of My Conclusion
Before finishing this piece, I did another experiment with GPT-4. I fed it the first three quarters or so of this piece and asked it to write a conclusion in a similar style.
What it came up with was, well, rather weak.
It was the kind of summary ending you might find stuck on a high-school
or undergraduate essay, a bald recapitulation of some key points from the piece
that didn’t add anything new or bring any new eloquence to the argument.
Recognizing that this might reflect badly on me rather than on ChatGPT, I sought to adjust by resubmitting the request and asking it to provide the conclusion in different language.
The result was better, although still not exactly George Orwell.
Here are the last
few paragraphs:
It is crucial that the deployment of these models
into public discourse is done with great caution, given the potential risks
they pose. Unfortunately, commercial interests seem to be driving the
widespread adoption of these technologies, giving the tech industry far more
leeway than other industries. In light of these concerns, it is essential that
we approach the integration of language models into our society with critical
and discerning eyes.
And in this instance, at least, I believe it’s
telling the truth.