Could AI become smarter than humans? | Terms of Service — Transcript

Discussion on the possibility, definition, and implications of AGI, featuring AI researcher Nick Frosst and exploring current AI capabilities and future prospects.

Key Takeaways

  • AGI is not yet achieved and current AI systems are specialized rather than general intelligence.
  • The definition of AGI involves treating a computer as a person, which is not the case with today's AI.
  • There is skepticism about AGI arriving in the near future, requiring breakthroughs beyond current technologies.
  • The hype around AGI serves both human curiosity and venture capital interests.
  • Artificial superintelligence is a more speculative concept and even further from current capabilities.

Summary

  • The video explores the concept of Artificial General Intelligence (AGI) and whether it could surpass human intelligence.
  • Nick Frosst, AI researcher and co-founder of Cohere, shares his perspective on AGI and its realistic timeline.
  • AGI is defined as a computer system treated as a person, capable of independent thought and action, which current AI does not yet achieve.
  • The discussion highlights the human fascination with creating intelligence in our own image and the motivations behind AI development.
  • Frosst contrasts AGI with artificial superintelligence, describing the latter as a system treated like a god; both terms are loosely defined.
  • The video touches on the hype and narratives in Silicon Valley around AGI, including its appeal to venture capitalists.
  • Historical context is given about neural networks and breakthroughs like AlexNet that revived AI interest.
  • Frosst expresses skepticism about AGI arriving soon, emphasizing the need for multiple independent inventions beyond current transformer models.
  • The conversation includes reflections on how people currently interact with AI systems and their limitations compared to human-like intelligence.
  • The video aims to clarify misconceptions and provide a grounded understanding of AGI's potential and challenges.

Full Transcript

00:00
Speaker A
It has been maybe two years now that every time I enter a room and give a talk like this, I'll ask people how many of you think AGI is coming really soon, and pretty much nobody does.
00:09
Speaker B
Wait, I would like to do that here.
00:11
Speaker A
Yeah, let's do that here.
00:11
Speaker B
Yeah.
00:12
Speaker A
Yeah, the question is, how many of you think we'll get AGI in the next few years, "few" being three?
00:18
Speaker B
Zero hands.
00:19
Speaker A
Zero hands.
00:20
Speaker B
Zero hands.
00:20
Speaker A
So nobody.
00:21
Speaker A
Nobody.
00:22
Speaker A
I think Sam Altman only hangs out in rooms where the answer to that question is yes by everybody.
00:29
Speaker B
Thank you.
00:30
Speaker B
Thank you so much for being here. It is so great to be here at Ludlow House with the wonderful On Air Presents team and our guest Nick Frosst.
00:40
Speaker B
And also it's so nice to have a live audience.
00:41
Speaker B
My name is Clare Duffy and on this show Terms of Service, we talk about the latest advancements in technology and we try to help people understand how that tech is going to impact their lives right now, so that we can all be smarter, more intentional tech users.
00:55
Speaker B
Today we're going to be discussing one of the spiciest, most talked about, but most controversial topics surrounding artificial intelligence, and that is artificial general intelligence.
01:05
Speaker B
This is a not very well-defined term, but generally refers to some theoretical future AI system that could be as smart as, if not smarter than humans.
01:54
Speaker B
And the question in my mind is, can we actually build this technology, should we be trying to in the first place, and what could it mean for all of us if we do?
02:03
Speaker B
So we have tonight Nick Frosst, thank you so much for being here. He is an AI researcher and the co-founder of the AI startup Cohere, which last year reached a nearly $7 billion valuation.
02:15
Speaker B
And you have had this sort of interesting take about AGI. There are many people in Silicon Valley, including OpenAI CEO Sam Altman, who have said that AGI is potentially around the corner; you've said those comments are a disservice to the industry.
02:31
Speaker B
So I'm really excited to get into this with you and with all of you.
02:35
Speaker B
Thank you for being here, Nick.
02:36
Speaker A
Thanks for having me.
02:37
Speaker B
So, just to back up first.
02:40
Speaker B
You studied computer science at the University of Toronto, you worked under Geoffrey Hinton, who is now well known as one of the godfathers of AI, and you were his first hire at the Google Brain lab in Toronto.
03:10
Speaker B
Why did you decide to get into AI?
03:12
Speaker A
I think I got into it for probably the same reason most people got into it.
03:20
Speaker A
Especially when I got into it in, wow, 2010, 2011.
03:26
Speaker A
Um, which is that it just seems kind of intrinsically interesting.
03:28
Speaker A
Like there's something very human about wanting to create more intelligence.
03:33
Speaker A
Um, I was reading this Tolkien essay recently.
03:36
Speaker A
Um, and there's a line in it where he says it's intrinsically human to want to talk to trees.
03:40
Speaker B
Hmm.
03:41
Speaker A
Which really resonated.
03:42
Speaker A
I was like, yeah.
03:43
Speaker B
Same.
03:43
Speaker A
Yeah, there's something really human about wanting to build stuff in our image and make something to speak to.
03:48
Speaker A
So I think a lot of people got interested in AI just because it was cool to push the limits of what computers can do and envision what they might be able to do in the future.
03:53
Speaker A
Interestingly, Geoff Hinton, who I credit with teaching me everything I know about machine learning.
04:00
Speaker A
Um, really didn't get into AI for that reason.
04:03
Speaker A
He really got into AI, I mean, he kind of invented neural nets, um, for the purpose of understanding how the human brain works.
04:09
Speaker B
And so when you started working with Jeff at the University of Toronto and then Google Brain.
04:14
Speaker B
What did you think that you were building at the time?
04:19
Speaker B
And did you have a sense that AI was going to be as big a deal as it is today?
04:23
Speaker A
No.
04:24
Speaker A
I remember back in 2013 or 2012, when I first started doing more research in machine learning.
04:30
Speaker A
This is a few years after neural nets had for the first time shown that they were good practically.
04:37
Speaker A
There was a thing called AlexNet, which was an image classifier.
04:44
Speaker A
So it could take in a picture and tell you what was in the picture out of like one of a thousand different classes.
04:50
Speaker A
And it was invented at the University of Toronto under Jeff, by Alex Krizhevsky.
04:53
Speaker B
Which seems kind of pedestrian now, but at the time that was huge.
04:56
Speaker A
At the time it was huge.
04:57
Speaker A
Yeah, at the time it was huge and it was the first time that neural nets after being worked on for like 20 years had done anything useful.
05:03
Speaker A
And I remember learning about that and being like, man, if only I had been here a little bit earlier.
05:09
Speaker B
How could you?
05:10
Speaker A
I really missed the boat on this whole neural net thing.
05:13
Speaker A
Yeah.
05:14
Speaker A
Um, and so that, yeah, that really stands out to me.
05:17
Speaker A
That was a moment where I was like, man, it's just, it's going to be downhill from here.
05:20
Speaker A
Um, I was definitely wrong about that.
05:21
Speaker A
I'm still surprised at how good language models are.
05:25
Speaker B
Hmm.
05:26
Speaker A
And even when I left Google to found Cohere, an enterprise-focused language modeling company.
05:32
Speaker A
Even at that time I was surprised at how good they were.
05:38
Speaker A
And I've continued to be surprised at how good language models are over the intervening six years.
05:42
Speaker B
So AGI, artificial general intelligence.
05:46
Speaker B
Is the thing that lots of people talk about. I think you could ask 12 people and they might each have a slightly different definition.
05:52
Speaker B
What is your definition of AGI?
05:55
Speaker B
How do you think about it?
05:56
Speaker A
Artificial general intelligence just means a computer that you treat as a person.
06:00
Speaker B
And so we're not there yet.
06:02
Speaker A
No, we don't treat computers like people.
06:04
Speaker A
And if you use an AI system now, like if you use a language model for a while.
06:12
Speaker A
Pretty quickly you're going to realize it's really good at some things and really bad at some others.
06:17
Speaker A
And people who use language models for the most part don't interact with them like they would interact with a person.
06:23
Speaker A
They instead pretty quickly understand, hey, these are the things it's great at.
06:27
Speaker A
And these are the things that it's not very good at.
06:30
Speaker A
So when I think of AGI, I think of like, when will we have a technology.
06:35
Speaker A
That, when you interact with it, you expect to behave as a person.
06:42
Speaker A
Like, I mean, you'd have a conversation like we're having now.
06:47
Speaker A
And we both are functioning independently in the world and I anticipate that you will go off and continue to do things after this podcast is done being recorded.
06:52
Speaker A
And none of that is the way I interact with a language model right now.
06:57
Speaker B
And why do you think that AGI is sort of the North Star that so many tech companies are building toward right now?
07:03
Speaker A
There's a few reasons for that.
07:05
Speaker A
One is the human one.
07:07
Speaker A
Like it's very human to want to talk to computers.
07:10
Speaker A
Um, it's very human to want to build more humans.
07:12
Speaker A
Like that seems to be a motivation that is independent of economics.
07:16
Speaker A
There's the other one that it's a really compelling narrative for raising capital.
07:20
Speaker A
It's really, really compelling for venture capitalists if you say, hey, I'm going to build infinite digital people.
07:26
Speaker A
What's wild is it turns out that's not even a good enough narrative anymore.
07:30
Speaker A
And now we, now the narrative has moved on to like, now I'm going to build a digital god.
07:34
Speaker A
Like, you know, artificial super intelligence.
07:36
Speaker B
So you think of AGI and super intelligence as sort of two different things.
07:39
Speaker A
I think of them as both pretty poorly defined marketing terms.
07:41
Speaker B
Right.
07:41
Speaker A
Okay.
07:42
Speaker A
And if I were to carry on my definition of AGI, by saying it's a thing you treat as a person.
07:50
Speaker A
I would then say artificial super intelligence is a thing you treat as a god.
07:53
Speaker B
Hmm.
07:54
Speaker B
Yikes.
07:54
Speaker A
Yeah, but we don't have the technology for the first one.
07:59
Speaker A
We certainly don't have technology for the second one.
08:02
Speaker B
There are some people in Silicon Valley who believe that AGI could be achieved within the next decade.
08:08
Speaker B
What is your thought on that?
08:10
Speaker B
Is that realistic?
08:11
Speaker A
My thought is that in order to create a computer that behaves like a person.
08:16
Speaker A
We will need several independent spontaneous inventions that are disconnected or unrelated to the transformer neural network architecture that powers language models today.
08:22
Speaker B
Hmm.
08:23
Speaker A
And the view of pretty much everybody who works in the industry who isn't posting on Twitter.
08:27
Speaker A
Is that language models are a necessary but insufficient component of a human-like intelligence.
08:32
Speaker B
We're more than a decade away, it sounds like, in your mind.
08:34
Speaker A
Well, what I'm trying to say when I say several spontaneous independent inventions is that you can't make a prediction of when those are going to happen.
08:40
Speaker B
Hmm.
08:40
Speaker A
Because it's not the case that we know what needs to be built and we just need to build it better.
08:47
Speaker A
The things that we will need to invent to have a computer behave like a person.
08:52
Speaker A
Like, I don't know what they are.
08:53
Speaker A
So you can't make a timeline of when they're going to happen.
08:56
Speaker A
Like transformers have gotten a lot better over the past five years.
09:00
Speaker A
They're not getting more like people.
09:01
Speaker B
Right.
09:01
Speaker A
They're getting more like very useful parts of a computer.
09:05
Speaker B
I want to get both into what Cohere is doing, but sort of before we even get there.
09:10
Speaker B
I'm curious for your thoughts on the downsides of having AGI be the focus of AI development.
09:16
Speaker A
Yeah.
09:17
Speaker A
I mean, I think there's a handful of downsides of having it be the focus of AI development.
09:20
Speaker A
Um, yeah, one of them is it's not clear that we want it or need it.
09:23
Speaker A
Like I don't wake up in the morning and say, oh God.
09:26
Speaker A
If only my computer was a person.
09:28
Speaker A
I don't really look into that.
09:30
Speaker A
Whereas I do wake up and say, man, if only my computer could do things for me better.
09:36
Speaker A
If only I didn't have to do that boring piece of work.
09:38
Speaker A
If only my computer could do that for me.
09:40
Speaker A
Like if only I could, you know, be more empowered as a thinker to sit behind a computer and have it help me with my work.
09:45
Speaker A
I think about that all the time.
09:47
Speaker A
I don't think about, oh no, if only there was more digital people.
09:50
Speaker A
So while I understand where that motivation comes from.
09:53
Speaker A
Like an intrinsic human thing, a fundraising thing.
09:58
Speaker A
Um, a belief that if there were digital people that would be helpful economically.
10:04
Speaker A
Like there's lots of very interesting questions and things to think about in that context.
10:08
Speaker A
I don't wake up feeling that need.
10:10
Speaker B
It's interesting because there are some, I mean, Mark Zuckerberg has talked about, maybe one day everybody can have 15 digital friends.
10:16
Speaker B
Because there's a loneliness crisis.
10:17
Speaker A
Yeah.
10:18
Speaker A
That was a really bleak thing for him to say.
10:20
Speaker A
I don't feel that way.
10:21
Speaker A
I don't, I don't feel that way.
10:22
Speaker A
I think there is a loneliness crisis, I don't think the solution to the loneliness crisis is for people to talk to language models.
10:26
Speaker B
We've also heard these sort of extreme predictions that AGI super intelligence could pose an existential risk to humanity.
10:33
Speaker B
Do you worry that this technology is dangerous or does it just feel so far out?
10:38
Speaker A
Well, yeah.
10:39
Speaker A
I mean, let's, let's separate those two things.
10:41
Speaker A
There's one, does it pose an existential risk?
10:42
Speaker A
No.
10:43
Speaker A
No, no, I don't think it does.
10:44
Speaker A
Two, is it dangerous?
10:45
Speaker A
There's definitely ways that language models can be misused.
10:47
Speaker A
It's a really powerful technology.
10:49
Speaker A
Any powerful technology can be misused.
10:51
Speaker A
Anything that is so useful can be used well and can be used in a damaging way.
10:55
Speaker A
So there really are risks.
10:56
Speaker A
This is one of the things that I think the conversation around AGI hurts.
10:59
Speaker B
Hmm.
10:59
Speaker A
Right.
11:00
Speaker A
Like there are ways in which language models can be detrimental to society.
11:05
Speaker A
Misinformation is a big one.
11:06
Speaker A
Um, economic inequality is something I'm particularly worried about.
11:10
Speaker A
And I'm worried about the impact of an augmentative machine like language models.
11:16
Speaker A
What the impact of that is on income inequality.
11:19
Speaker A
Um, I think the discussion around existential threats posed by technologies that don't exist today.
11:25
Speaker A
Does a disservice to the people interested in talking about the threats that exist from the technology that does exist.
11:31
Speaker A
Hmm.
11:32
Speaker B
Yeah, I mean, we saw this letter that was signed by a bunch of AI researchers.
11:35
Speaker B
Saying we should pause AI development because of the existential risk.
11:38
Speaker A
Yeah, I think that was, in retrospect, a pretty silly letter.
11:40
Speaker A
Right.
11:41
Speaker A
Like I think that isn't the discussion that's happening much anymore.
11:42
Speaker A
The conversation now is about things like trust and safety, data privacy and misinformation, things I think are much better conversations to be having for policy makers and researchers.
11:49
Speaker B
Sort of on that note, even if we don't reach AGI, do you worry about the ripple effects for people in the process of trying to make a more human-like AI?
11:56
Speaker B
And I think specifically about the instances where we've seen people form these really deep relationships with chatbots.
12:04
Speaker B
Start to wonder if they're sentient or conscious.
12:06
Speaker B
Is that something that like, even if we don't continue to advance the technology as rapidly as some people think that we will?
12:12
Speaker B
Is that something that you worry about on, on the way?
12:14
Speaker A
Yeah.
12:15
Speaker A
I mean, that's, that's an interesting question.
12:16
Speaker A
Because at Cohere, we're focused only on the work application of this. Like, we sell to large businesses.
12:22
Speaker A
We sell to enterprise and companies that use our model for automating and augmenting boring work.
12:25
Speaker A
Every time somebody doesn't have to do something boring at work, I feel great.
12:29
Speaker A
Um, we don't have to deal with a lot of these things.
12:30
Speaker A
There's a lot of these questions that are more about like, oh, well, what does it mean for the interpersonal or something?
12:35
Speaker A
And those are much more relevant to like a company that's charging 20 bucks a month to get a subscription for a consumer to use the chatbot in their personal lives.
12:40
Speaker A
That's not as relevant to our business.
12:42
Speaker A
But I do think it's something we should be thinking about.
12:44
Speaker A
Um, I don't know how exclusive that is.
12:47
Speaker A
Like how unique that is to AI.
12:49
Speaker A
Right.
12:50
Speaker A
Like to language models.
12:51
Speaker A
Like there are lots of people who can form bizarre bonds, emotional connections to inanimate things.
12:56
Speaker A
That problem existed well before language models.
12:58
Speaker A
Language models heighten it and make it a little weirder.
13:00
Speaker A
I think the solutions to the language model version of that are probably the same solutions to the versions that happened before language models.
13:05
Speaker A
And those are like, you know, solving the loneliness epidemic.
13:06
Speaker A
Helping out with mental health.
13:08
Speaker A
Like coming up with better approaches for forming connections and relationships with people and things like that.
13:14
Speaker B
So I want to get a bit more into Cohere, you co-founded Cohere in 2019.
13:18
Speaker B
As I said, it's now reached a nearly $7 billion valuation.
13:22
Speaker B
What is the company's mission if AGI is not your North Star?
13:27
Speaker B
What are you building?
13:28
Speaker A
Yeah, I mean, because at Cohere, we're focused only on the work application of this.
13:34
Speaker A
Like, we sell to large businesses.
13:37
Speaker A
We sell to enterprise and companies that use our model for automating and augmenting boring work.
13:40
Speaker A
The mantra we say a lot at Cohere is "ROI, not AGI."
13:44
Speaker B
Hmm.
13:44
Speaker A
Um, which really just means we want to build something useful with the technology today.
13:49
Speaker A
Like large language models can be doing so much more than they are doing now.
13:55
Speaker A
But they are blocked by things like data privacy, they're blocked by cost.
14:00
Speaker A
They're blocked by retention.
14:03
Speaker A
Um, they're blocked by performance, they're blocked by not having good enough interfaces, they're blocked by not being connected to the right data sources within a company, within an enterprise.
14:09
Speaker A
Um, they're blocked by not being trained on the right thing.
14:14
Speaker A
They're trained to like write great poetry and solve integrals.
14:18
Speaker A
But it turns out that's not what most people's work looks like.
14:21
Speaker A
You know, most people's work.
14:22
Speaker B
Thank goodness.
14:22
Speaker A
Yeah.
14:23
Speaker A
Instead they're like, yeah, they're processing documents, they're, they're searching through Slack, they're answering requests from customers.
14:30
Speaker A
You know, they're, they're checking out the formulas on a spreadsheet to make sure they put the right information in and stuff like that.
14:36
Speaker A
So those are all things blocking neural nets and transformers, language models.
14:41
Speaker A
From alleviating the burden of boring work from people.
14:45
Speaker A
Um, and that's really what we're trying to do.
14:50
Speaker A
We're trying to make large language models as useful as possible for people at work.
14:54
Speaker B
Can you give us some examples of things that you're enabling customers to do?
14:57
Speaker A
Yeah.
14:58
Speaker A
Yeah, I mean, responding to customer requests is a big one.
15:00
Speaker A
Right, so there's a handful of tech companies, actually, among our customers out there.
15:06
Speaker A
Who use Command, our model, and North, which is our agentic platform.
15:12
Speaker A
They use that for responding to customer requests.
15:16
Speaker A
So tickets are now filed like, I don't know, significantly faster.
15:19
Speaker A
Um, there are customers out there that use that for banking, like financial institutions that use North for giving highly personalized information to their customers.
15:25
Speaker A
So you can imagine like a financial advisor sitting down with a customer and having a model bring up that customer's information immediately.
15:32
Speaker A
And summarize all their past conversations, and help them out with the conversation.
15:37
Speaker A
And so they can be better bankers.
15:39
Speaker B
So my very important personal question is, when are we going to get AI to file our expenses for us?
15:42
Speaker B
This is the only thing I want to get.
15:44
Speaker A
That's such a good.
15:45
Speaker A
Yeah.
15:46
Speaker A
That's such a good question.
15:47
Speaker A
Right, like if you're an employee at a company.
15:51
Speaker A
You have to file expenses.
15:53
Speaker A
Filing expenses requires going through your email.
15:58
Speaker A
Probably looking at your photos if you take pictures of receipts.
16:02
Speaker A
I don't know if that's.
16:03
Speaker B
That's how I would like it to work.
16:05
Speaker B
I would like to just take a picture of the receipt and it goes.
16:07
Speaker A
Have it filed.
16:08
Speaker A
It requires reading the policy from your company.
16:10
Speaker A
To know what you can and can't file.
16:12
Speaker A
To say like, hey, you can file dinner, but you can't file that beer.
16:15
Speaker A
You know, or something like that.
16:16
Speaker A
So it requires reading policy.
16:18
Speaker A
It requires having access to the tool within your company.
16:21
Speaker A
And it requires doing that in a fast and efficient and data secure way.
16:25
Speaker A
Those are all the things that, that we are addressing.
16:28
Speaker A
I have not filed an expense with our model yet.
16:32
Speaker B
Hmm.
16:33
Speaker A
Um.
16:34
Speaker A
But it, it has access to images, it has access to my email.
16:38
Speaker A
I don't think it has access to the system we use for filing expenses.
16:43
Speaker A
But I think that's very soon, actually.
16:45
Speaker B
When it happens, I want to.
16:46
Speaker B
I want to write that story.
16:47
Speaker A
Yeah.
16:48
Speaker A
I think that's the type of thing we can expect.
16:50
Speaker A
I would say this year.
16:51
Speaker A
That sounds very doable.
16:52
Speaker B
Love it.
16:52
Speaker B
Love it.
16:53
Speaker B
So Cohere was founded in Toronto, do you think that starting the company outside of the Silicon Valley bubble has helped you think differently about the approach to developing this technology?
16:59
Speaker A
I think we're all a little contrarian.
17:02
Speaker A
Like Aidan and Ivan and I, the three co-founders, are all a little contrarian.
17:05
Speaker A
And so we've always kind of taken a different approach.
17:07
Speaker A
You know, we're a global company, we now have offices in New York, offices in Europe and in Asia, and in San Francisco as well.
17:13
Speaker A
But we have been headquartered in Toronto.
17:16
Speaker A
And like, we've never moved down to Silicon Valley.
17:18
Speaker A
There is a pretty big monoculture there.
17:23
Speaker A
And I think that's one of the reasons why you see what turns out to be a pretty insular worldview and insular philosophy.
17:30
Speaker A
Experienced as though it is the dominant one.
17:34
Speaker A
Right.
17:35
Speaker A
But the AGI thing is really interesting. Like, it has been maybe two years now.
17:40
Speaker A
That every time I enter a room and give a talk like this, I'll ask people how many of you think AGI is coming really soon.
17:45
Speaker A
And pretty much nobody does.
17:47
Speaker B
Wait, I would like to do that here.
17:48
Speaker A
Yeah, let's do that here.
17:49
Speaker B
Yeah.
17:49
Speaker A
Yeah, the question is, how many of you think we'll get AGI in the next few years, "few" being three?
17:55
Speaker B
Zero hands.
17:56
Speaker A
Zero hands.
17:57
Speaker B
Zero hands.
17:57
Speaker A
So nobody.
17:58
Speaker A
Nobody.
17:59
Speaker A
I think Sam Altman only hangs out in rooms where the answer to that question is yes by everybody.
18:06
Speaker A
And I think that gives you a pretty weird view of what you're building.
18:11
Speaker A
You know.
18:12
Speaker A
Now, to be clear, OpenAI is an awesome company.
18:16
Speaker A
And they've built great technology.
18:18
Speaker A
And they continue to build great technology.
18:20
Speaker A
And there's tons of amazing companies that have built excellent, useful stuff.
18:25
Speaker A
While surrounded by an echo chamber.
18:28
Speaker A
But it gives you a pretty warped understanding of the technology you're building and how it can be useful for people.
18:33
Speaker A
Yeah, so I think being outside of that has been helpful for our objectives.
18:36
Speaker B
So I want to take through just a couple of the other sort of hot button topics in AI.
18:40
Speaker B
The first being this idea that the AI market is a bubble that is waiting to burst.
18:44
Speaker B
Your fellow co-founder and Cohere CEO Aidan Gomez has said that Cohere is on the right side of the AI bubble.
18:49
Speaker B
Tell me more about that.
18:50
Speaker A
Do I think we're in a bubble?
18:51
Speaker A
Um, I think if you're building a business and you're setting up your financials based on the imminence of AGI.
19:00
Speaker A
And if you're building a business that is only going to be profitable when AGI comes.
19:06
Speaker A
You're in for a rocky time.
19:08
Speaker A
I think there's no way you can build a business off the bet that that technology shows up when you just keep pouring money into the compute factory and then out comes AGI.
19:15
Speaker A
I think there's no way you can build a sustainable business with that.
19:18
Speaker A
So I don't know if it's a bubble.
19:19
Speaker A
Um, but I know it's going to be a very difficult time for the companies that have bet their balance sheet on AGI happening in the next few years.
19:26
Speaker A
When Aidan says we're on the opposite side of that.
19:30
Speaker A
Um, what he means is that we're building a very real business.
19:35
Speaker A
Based on the technology that we have today.
19:40
Speaker A
And as it gets better, you know, our offering gets better.
19:43
Speaker A
But we're building something that is useful and has good economic fundamentals.
19:50
Speaker A
Without the need for several independent spontaneous inventions to get us to AGI.
19:54
Speaker B
Another big topic of conversation surrounding AI right now is what it's going to mean for all of our jobs.
19:57
Speaker B
And I'm curious for your take on this, especially as somebody who's working on automating processes within companies.
20:05
Speaker B
Do you think that we could see a jobs apocalypse because of this technology?
20:09
Speaker A
Let's talk about history for, for a little bit.
20:10
Speaker A
This is not the first rapid change in technology.
20:14
Speaker A
There have been several.
20:15
Speaker A
Right.
20:16
Speaker A
Um, there have been several within relatively recent history.
20:19
Speaker A
There's been a handful of things that have come around that have really changed a bunch of work.
20:22
Speaker A
Each one of those has resulted in a pretty drastic change in where people are employed and doing what.
20:26
Speaker A
Looking back on all of them, I think mostly people think they were a good idea.
20:30
Speaker A
And although they felt chaotic at the time, it was actually relatively slow.
20:35
Speaker A
And jobs in the economy are pretty resilient, it turns out.
20:39
Speaker A
And we figure out what people are great at doing and we get them to do that.
20:44
Speaker A
And we have a pretty flexible system.
20:46
Speaker A
Right, like before the Industrial Revolution, it was what, like 90 some percent of people were farming.
20:50
Speaker A
Now it's like two.
20:51
Speaker A
And largely everybody is pretty okay with that.
20:55
Speaker A
I think if we said, do you all want to go back to being farmers?
20:58
Speaker A
The answer is like, no.
20:59
Speaker A
No, no, we don't.
20:59
Speaker A
The machines are actually much better at farming.
21:01
Speaker A
Um, I do think this is going to have an impact on the nature of work.
21:04
Speaker A
In the same way that the personal computer changed the jobs that were being done.
21:07
Speaker A
Um, I don't think it's going to be a job apocalypse.
21:10
Speaker A
People are resilient and the economy is resilient.
21:12
Speaker A
And there's lots of things that language models are very bad at.
21:16
Speaker A
And we're going to keep making language models better.
21:18
Speaker A
But they're not going to get better at those things.
21:20
Speaker A
So I, I think this is going to change things.
21:24
Speaker A
But it's largely going to be seen as a good thing in the long term.
21:28
Speaker A
And it's going to be a little slower than people think.
21:30
Speaker A
But I want to be clear that that requires policy.
21:32
Speaker A
That requires government work.
21:34
Speaker A
Like out of the Industrial Revolution came really good unions.
21:38
Speaker A
You know, came really good workers rights.
21:40
Speaker A
Came the five-day work week.
21:42
Speaker A
Um, we should be thinking about things like that now.
21:44
Speaker B
That was a question for me because it, it feels like we often hear from people in the industry.
21:49
Speaker B
Like, yes, this is going to have an impact on jobs.
21:51
Speaker B
Yes, we need to be talking about this.
21:53
Speaker B
And then the question for me is like, well, who should be talking about it?
21:57
Speaker B
And what should they be doing about it?
21:58
Speaker B
So it sounds like you think policy makers need to be having more serious conversations.
22:00
Speaker A
Policy makers and union leaders and employers, yeah.
22:05
Speaker A
Those are the same people who talked about it in the last Industrial Revolution.
22:07
Speaker A
They need to be talking about it now.
22:10
Speaker B
So this idea of automating boring work, to me, makes a lot of sense inside of companies.
22:15
Speaker B
But I'm curious, for everybody here, short of their company coming to work with Cohere.
22:22
Speaker B
Are there ways to think about having AI do those tasks that you don't want to do in your everyday life?
22:27
Speaker A
Yeah, I mean, I think the advice is the same.
22:30
Speaker A
There, for people using it in a personal setting.
22:33
Speaker A
I don't run into it in my personal life as often as I run into it in my work life.
22:36
Speaker A
I'm not really trying to do my personal life faster.
22:39
Speaker A
I'm not trying to respond to texts from my friends faster.
22:41
Speaker A
I'm trying to do it more often.
22:42
Speaker A
But I'm not trying to do it faster.
22:45
Speaker A
I'm not trying to do it with a machine to help me.
22:47
Speaker A
Um, there are things in your life that you definitely should get a neural net to do.
22:50
Speaker A
A language model to do.
22:52
Speaker A
The heuristic I use, and that I use for our customers as well, is this: as an employee or as a person,
22:58
Speaker A
Think about the tasks that you do where you know how to do it, you know the information is out there.
23:03
Speaker A
It's all on the computer.
23:05
Speaker A
It's just going to take you time.
23:06
Speaker A
And those are the things that you should probably get a large language model to help you with.
23:11
Speaker B
Are there things that you do use AI in your personal life for?
23:14
Speaker A
I'm constantly asking questions of the internet.
23:18
Speaker A
And I find neural nets good for that. Like, Perplexity is a great app.
23:20
Speaker A
You know.
23:21
Speaker A
Um, it's great to have a question and then have a model synthesize a bunch of different sources to give you an answer.
23:25
Speaker A
I often check in on those sources afterwards all the same, um, but that's a super useful one.
23:29
Speaker A
I also do a lot of like vibe coding just for fun.
23:31
Speaker A
It's very enjoyable to like have little things I want to build and, you know, use a large language model for helping me build that.
23:36
Speaker A
Um, yeah, so I use that a lot.
23:37
Speaker B
Um, you also lead a band.
23:38
Speaker B
People may or may not know it. It's called Good Kid.
23:40
Speaker B
Uh, as a musician.
23:42
Speaker B
What is your take on the AI-generated music that has topped the charts recently?
23:46
Speaker A
Oh, that's another good question.
23:47
Speaker A
That's another good question.
23:48
Speaker A
Um, yeah.
23:50
Speaker A
I have been asked over the years.
23:53
Speaker A
You know, do I use neural nets to write lyrics?
23:55
Speaker B
Hmm.
23:55
Speaker A
And the answer to that has always been no.
23:57
Speaker A
Because I'm not looking to write lyrics faster.
23:59
Speaker A
I'm not trying to optimize my creative process.
24:02
Speaker A
I do sometimes use our language model, like I'll write lyrics to a song.
24:06
Speaker A
And then I'll put them in the language model and I'll be like, tell me what this is about.
24:10
Speaker A
And if it gets it completely wrong, I'll be like, okay, well, no one knows what the hell I'm talking about.
24:13
Speaker A
Or if it gets it completely right, I'll be like, yeah, maybe I, you know, maybe I should try to make it a little more nuanced.
24:17
Speaker A
So I use it as kind of something to bounce ideas off of.
24:20
Speaker A
But I don't use it for writing because I'm not trying to write faster.
24:22
Speaker A
Now, the recent wave of generated music dominating the charts is an interesting one.
24:25
Speaker A
When I see people engage with art, and even engage with our art, like our band's.
24:30
Speaker A
Um, you know, there's a huge emphasis on the person that made the art.
24:34
Speaker A
Like, I think about, a few years back, there was Christopher Nolan's Oppenheimer.
24:38
Speaker A
And like, the fact that Christopher Nolan made it was a big part of why people were excited to go see that movie.
24:43
Speaker B
You want to connect to another person.
24:44
Speaker A
Yeah.
24:45
Speaker A
You want to connect to another person.
24:46
Speaker A
Yeah, so that's mostly why we consume art.
24:48
Speaker A
That's not the only reason we consume art, and there are some places where very passive art is actually great.
24:52
Speaker A
You know, hence the name elevator music.
24:53
Speaker A
And there's going to be a few artists who figure out how to make AI art that touches something in people.
24:58
Speaker A
And people will connect with that digital thing in the same way that there's tons of people going to Hatsune Miku concerts.
25:02
Speaker A
And they have been for 20 years.
25:04
Speaker A
Right, like, you know, that's something where people have connected with that digital persona.
25:07
Speaker A
I'm sure there's going to be a handful of things like that.
25:12
Speaker A
But ultimately, I don't think you'll see completely generated music that is culturally relevant.
25:16
Speaker B
Well, Nick Frost, thank you so much for doing this.
25:19
Speaker B
Thank you to this wonderful audience.
25:21
Speaker B
This is really interesting.
25:22
Speaker A
Thanks for having me.

Frequently Asked Questions

What is the definition of AGI according to Nick Frost?

Nick Frost defines AGI as a computer system that you treat as a person, capable of independent thought and action, unlike current AI systems which are specialized and limited.

Does Nick Frost believe AGI will be achieved soon?

No, Nick Frost is skeptical about AGI arriving in the near future and believes it will require several independent inventions beyond current transformer-based AI models.

What motivates the pursuit of AGI in the tech industry?

The pursuit of AGI is motivated both by a human desire to create intelligence in our own image and by compelling narratives that attract venture capital investment.
