Episode Transcript
0:01
BBC Sounds, music radio
0:03
podcasts. Hello lovely curious-minded
0:05
people, welcome to Inside Science, a
0:08
program that was first broadcast on
0:10
the 27th of February 2025. I'm
0:13
Victoria Gill, and today we are
0:15
going to delve straight into a
0:17
scientific revolution, artificial intelligence, and its
0:20
transformational role in science. Over the
0:22
next half hour, we're going to
0:24
unravel the power that's promised and
0:27
the threats that are posed by
0:29
this transformational technology. We'll investigate whether
0:32
AI really solved a major microbiological
0:34
mystery that had taken human scientists
0:36
years to crack in just a
0:38
few days, and we're examining the
0:41
bizarre world of AI fakery and
0:43
how it's making its way into
0:45
scientific papers. To navigate all of
0:47
this we have technology expert and
0:49
science communication lecturer Gareth Mitchell in
0:51
the studio. Hello, Gareth. Hello, nice to
0:53
be here. It's very nice to have you.
0:55
And this is right up your street, isn't
0:58
it? Oh, very much. Oh, I'm having such
1:00
a good program. Yes. You know, because I'm
1:02
so fascinated by machine learning. I've been reporting
1:05
on it for years. I know you have
1:07
as well, Vic. And I'm fascinated by the
1:09
way science happens. And so the story we're
1:11
talking about today brings both of those bumping
1:14
into each other, with amazing consequences. You know, for
1:16
good, we hope, but you know, I've spoken
1:18
to some scientists who are already sounding a
1:21
bit skeptical about it as well, so this
1:23
has put like a cat among the pigeons
1:25
a little bit. So yeah, it's a story
1:27
that matters for the world of science but
1:29
by default it means it matters for the
1:32
rest of us. Yeah, absolutely, let's get right
1:34
into it then, Gareth. You have been speaking
1:36
to a scientist who had a bit of
1:38
a bewildering experience when he tested a new...
1:41
AI tool, is that right? Yeah, absolutely, yes,
1:43
so this was at Imperial College where
1:45
I went to speak to Dr. Tiago
1:47
Costa and he and his team have
1:50
been testing a tool that's being developed
1:52
by Google and this is called the
1:54
co-scientist, so I suppose the clue's in
1:56
the name a little bit about what
1:59
this is about. And so far
2:01
this tool is being aimed specifically
2:03
at biomedical applications. And Tiago and
2:05
his team, they study bacteria, but
2:07
also these little things called phages.
2:10
And I hope you don't mind
2:12
a bit, but I might put
2:14
you on the spot here. Do
2:17
you know, as a top correspondent
2:19
and presenter, what are phages? Top
2:21
correspondent, I'm going to take that.
2:23
Okay, I mean, without referring at
2:26
all to my script and any
2:28
preparatory conversations I've had to do.
2:30
So phages are like little tiny
2:33
viruses that can infect bacteria, right?
2:35
Indeed, yes. And they do so
2:37
through these little tails. And it's
2:39
through the tail that genetic material,
2:42
DNA, in the phage, goes into
2:44
the bacteria. And there's some excitement
2:46
around them. I mean, scientifically, they're
2:49
very interesting, but potentially, therapeutically as
2:51
well. So these phages matter. And
2:53
we're talking about phages quite a
2:55
lot in this interview, or about
2:58
two here. So, yeah, you have
3:00
the phage, it has some DNA in
3:02
it. And there's a mechanism for
3:05
that DNA going into the bacteria.
3:07
So you spoke to Tiago about
3:09
his experience with co-scientist. So let's
3:11
hear more from that conversation. Phage
3:14
is an entity made of proteins,
3:16
this very big, large ball with
3:18
the DNA inside and a tube
3:21
through which the DNA gets injected
3:23
into the bacteria. These non-infectious phages
3:25
are lacking a tail. Okay, and
3:27
the tail is what is important
3:30
for a phage to bind the
3:32
surface of a bacteria and inject
3:34
the DNA. What we found was
3:37
that the DNA of these non-infectious
3:39
phages was embedded in different bacterial
3:41
species and we didn't know why
3:43
and how it was reaching there.
3:46
So you were trying to find
3:48
out how this process works, to get
3:50
down to it. So you need to
3:53
sit down at some point and
3:55
think, we think it might be
3:57
this thing, as I understand, and
3:59
then in science you have to
4:02
then go and test your hypothesis.
4:04
But Google came to you didn't
4:06
they at one point and said
4:09
we've been working on something we
4:11
want you to help us try
4:13
it out. Is that how it
4:15
happened? Yes, we were asked to do this basically to
4:18
understand whether the scientific hypotheses that
4:20
the AI system was generating were
4:22
valid. And the only way to
4:25
validate those hypotheses is actually go
4:27
to the lab and test whether
4:29
those hypotheses are correct or not.
4:31
Simple. So what did the system look like? I'm just
4:34
trying to think is it a
4:36
bit like ChatGPT? Well actually we
4:38
didn't have direct access to the
4:41
system because this was still being
4:43
under development by Google, so what we
4:45
did was to send them what
4:47
our scientific question was and a
4:50
few lines of introduction and a
4:52
few references. So what were you
4:54
typing into the Google co-scientist to
4:57
kind of get the hypothesis that
4:59
you wanted? So it was a
5:01
very simple question. It was how these
5:03
phages, which are non-infectious, are able
5:06
to infect bacteria from different species
5:08
and integrate themselves into their DNA.
5:10
What was the mechanism? So after
5:13
a few years of research, we
5:15
understood that what those non-infectious entities
5:17
were doing, they were hijacking the
5:19
tails from different phages in order
5:22
to inject their DNA, which was
5:24
encapsulated in this ball-shaped structure on
5:26
top of the tail. So we
5:29
knew the answer to the question
5:31
before we challenged the AI system.
5:33
So the question was, like, how
5:35
are these phages getting their DNA
5:38
into the bacteria, even though they
5:40
don't have the equipment, the mechanism,
5:42
the means, really, to do anything?
5:45
Yes, that was exactly the question.
5:47
And then, two days later, the
5:49
AI system came up with five
5:51
different hypotheses. So the algorithm spits
5:54
this out for you after 48
5:56
hours. What was your reaction when
5:58
you saw this? Well, it was,
6:01
we were very surprised because basically
6:03
the main hypothesis was a mirror
6:05
of the experimental results that we
6:07
have obtained and took us several
6:10
years to reach. So it was
6:12
shocking, I would say. Is
6:14
it possible though, just from the
6:17
literature you'd already put out, because
6:19
you'd be putting out preprints
6:21
and stuff as you go along,
6:23
presenting it at conferences, that this
6:26
had... kind of crept in as
6:28
it would do into Google's data
6:30
set and so really it was
6:33
just reciting your own research back
6:35
to you, or not? No, so
6:37
this preprint was kept
6:39
secret for a while. Why? Because
6:42
we sought the opportunity to patent
6:44
this technology, which we did,
6:46
so that's the reason why
6:49
this did not become public until
6:51
the patent was filed. And during
6:53
this process, we started to
6:55
talk with Google. There's no way
6:58
that Google would have access to
7:00
our preliminary data. while they were
7:02
generating the scientific hypotheses. Some of
7:05
this seems a little bit opaque
7:07
to me and I don't know
7:09
if Google have paid you to
7:11
do this research either. No,
7:14
there was zero funding, so this
7:16
was a benefit for both, but
7:18
zero pounds involved in this. As
7:21
we all know, anybody who's used
7:23
any of these LLMs and chat
7:25
bots, will know, they hallucinate. They
7:27
can come up with stuff that
7:30
seems plausible, but is... wrong. So
7:32
one of the hypotheses
7:34
that it has generated, we had never
7:37
thought about that. And from the
7:39
preliminary data that we already have,
7:41
it looks very, very likely that
7:43
that's a novel mechanism of DNA
7:46
transfer. Okay, so not only
7:48
has it validated our experimental work, but
7:50
it has generated a novel hypothesis
7:53
that we have never thought about.
7:55
And the preliminary data we have,
7:57
it's looking, it looks promising. Were
7:59
any of these hypotheses that it
8:02
generated complete nonsense? I would not say
8:04
nonsense, but I would say that
8:06
some are less relevant to the question.
8:09
They do not answer the question directly.
8:11
And this is why it's so
8:13
important for the human brain to critically
8:15
evaluate those hypotheses and not take
8:18
any of the hypotheses
8:20
as the final answer or the
8:22
definitive answer to the scientific question.
8:25
I think you say that with
8:27
genuine belief as somebody who is
8:29
paid to be a scientist and
8:31
doesn't want to be replaced any
8:34
time soon. Yeah, well, definitely this
8:36
system won't replace humans. That's for
8:38
sure. So it won't replace humans,
8:41
that's somewhat comforting. This is such
8:43
an intriguing story though, and we're
8:45
going to try and unpack exactly
8:47
what happened here. To help with
8:50
that, we have Maria Liakata
8:52
from Queen Mary University of London,
8:54
who is also a Turing AI
8:57
fellow. She is professor of natural
8:59
language processing and some of her
9:01
research focuses on the benefits and
9:03
limitations of AI in science. Hi,
9:06
Maria, welcome to Inside Science. Hi,
9:08
I'm really glad to be here.
9:10
Well, it's a pleasure to have
9:13
you and to hold our hands
9:15
through how exactly this works. So
9:17
many of us have at least
9:19
played with things like ChatGPT or
9:22
Google's Gemini. Is Google co-scientist something
9:24
very different from those chat bots
9:26
that we might be more familiar
9:29
with? How does it work? So
9:31
co-scientist is based on LLM technology, so
9:33
large language models, though it itself is a
9:35
much more complex system than a
9:38
single large language model. But basically
9:40
how large language models work: they
9:42
are state-of-the-art probabilistic algorithms that have
9:45
been trained on large amounts of
9:47
data and they are trained to
9:49
generate outputs given particular inputs.
9:51
Right. And inputs are prompts. So
9:54
they usually are texts, like a
9:56
phrase, a sentence, a question, or
9:58
more complex instructions. So when we
10:01
are asking a question to an
10:03
LLM, we are providing it essentially
10:05
with a prompt to generate an
10:07
output. Right. I sort of see
10:10
it as them hoovering up all
10:12
of this information and by listening
10:14
to it, consuming it, when you
10:17
give them a prompt, they're working
10:19
out what the most probable next
10:21
word, the answer to a question, is
10:23
from all of that text and
10:26
info that they've consumed. Is that
10:28
fair, if massively oversimplified? It's simplified
10:30
because they are making some really
10:33
complex associations. So they're not just
10:35
predicting the next word, but essentially
10:37
they're predicting what would be an
10:39
appropriate output given the prompt that
10:42
you've given. Right.
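[Editor's note: to make the "most probable next word" idea concrete, here is a minimal, self-contained Python sketch, not from the programme. It is a toy bigram model: the corpus, counts and function names are invented for illustration, and real LLMs learn far richer associations over vastly more data, as Maria says.]

```python
from collections import Counter, defaultdict

# Toy stand-in for the vast text a real large language model is trained on.
corpus = "the phage injects its dna into the bacteria and the phage replicates".split()

# Count bigrams: how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation of `word` in the toy corpus."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))    # 'phage' - follows 'the' twice, vs 'bacteria' once
print(predict_next("phage"))  # 'injects' - tied with 'replicates'; first seen wins
```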
10:44
And Google described co-scientist as a multi-agent system, multiple
10:46
LLMs, large language models. What does
10:49
that mean in practice? So that's
10:51
correct. So co-scientist itself is not
10:53
an LLM, it's rather a coalition
10:55
of LLMs, and they each specialize
10:58
in a different task. And so
11:00
what this multi-agent architecture means is
11:02
that you have... multiple LLMs that are
11:05
interacting with each other to generate
11:07
hypotheses that are evaluated and further
11:09
refined. And this kind of high-level
11:11
supervisor agent which coordinates all the
11:14
others, I would say, is the
11:16
main technical novelty in the system.
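[Editor's note: for illustration only, here is a hypothetical Python sketch of the generate, rank and refine loop Maria describes. Every function name is our own invention, each agent is a stub standing in for a large language model call, and Google's actual implementation, whose technical details are largely unpublished as Maria notes later, will differ.]

```python
import random

def generation_agent(question: str, n: int = 5) -> list[str]:
    # In the real system this would prompt an LLM to propose n candidate hypotheses.
    return [f"hypothesis {i} for: {question}" for i in range(1, n + 1)]

def ranking_agent(a: str, b: str) -> str:
    # Would ask an LLM judge which hypothesis better answers the question;
    # a random choice here just keeps the sketch runnable.
    return random.choice([a, b])

def refinement_agent(hypothesis: str) -> str:
    # Would prompt an LLM to tighten and elaborate the winning hypothesis,
    # possibly consulting external tools such as web search or databases.
    return hypothesis + " (refined)"

def supervisor(question: str) -> str:
    """Coordinate the other agents: generate, run a pairwise tournament, refine."""
    pool = generation_agent(question)
    while len(pool) > 1:
        winners = [ranking_agent(pool[i], pool[i + 1])
                   for i in range(0, len(pool) - 1, 2)]
        if len(pool) % 2:            # odd hypothesis out gets a bye
            winners.append(pool[-1])
        pool = winners
    return refinement_agent(pool[0])

print(supervisor("How do tail-less phages transfer their DNA between species?"))
```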
11:18
And I find something quite reassuring
11:21
about that in a way, Maria,
11:23
because we have a lot of
11:25
discussions about artificial general intelligence and
11:27
you know this idea that
11:30
before long, we're going to end
11:32
up with an amazing brain and
11:34
you just say, hey brain, how
11:37
do you cure a range of
11:39
diseases, and it'll just do it.
11:41
Clearly that's probably never going to
11:43
happen, certainly not in the way
11:46
that I've described it. Maria, from
11:48
what you're saying, when you have
11:50
multi-agents, it's just lots of LLMs
11:53
and other machine learning systems that
11:55
specialize in certain things. One of
11:57
them might generate some of
11:59
the initial hypotheses. One might run
12:02
the tournament that matches one off
12:04
against the other and works out which
12:06
is the best, like a ranking
12:09
kind of thing. In other words,
12:11
this is artificial, very specific intelligence.
12:13
It's going the opposite way from
12:15
this idea of artificial general intelligence, isn't
12:18
it? Yes, I think the trend
12:20
is to go for more sort
12:22
of smaller... more specialized systems than
12:25
having just one big system that
12:27
does everything. And sort of
12:29
the idea here is that not
12:31
only is it LLMs but they
12:34
also use external tools like web
12:36
searches and access to online databases.
12:38
So it's kind of quite
12:41
a complex pipeline that consists of
12:43
a lot of different components. One
12:45
thing that came out from the
12:47
interview with Tiago is that this
12:50
co-scientist came out with one or
12:52
two hypotheses that were not anything
12:54
to do with existing literature or
12:57
any training data that was already
12:59
there. These seem to be absolutely
13:01
novel and I was rather stunned
13:03
by that, Maria. Is that something
13:06
that you might have expected from
13:08
such a co-scientist as well? So
13:10
I think that this is possible.
13:13
So, I mean, a novel hypothesis
13:15
can be created by essentially generating
13:17
outputs that synthesize and stem from
13:19
indirect associations, right? So that you
13:22
have these kind of... complex contexts
13:24
that you wouldn't necessarily put together,
13:26
but by having this kind of
13:29
iterative hypothesis generation and refinement, it
13:31
can happen. Right. So I can
13:33
see there how those tools by
13:35
coming up with things that scientists
13:38
hadn't spotted could help scientists make
13:40
progress quickly, but what are the
13:42
pitfalls of this approach? So what
13:45
is presented here is a very
13:47
very complex pipeline. It's very resource
13:49
intensive. This is not really... discussed
13:51
very much in the paper, which
13:54
by the way is more of
13:56
a showcase paper than actually revealing
13:58
a lot of the technical details.
14:01
But you know, the fact that
14:03
it's so resource intensive means that
14:05
it's not something that would be
14:07
very readily replicable by scientists, so
14:10
they would have to sort of,
14:12
you know, be working with someone
14:14
like Google to do this. So
14:17
what do you think of the
14:19
potential dangers of scientists overestimating and
14:21
over-relying on... technology like
14:23
this? My main concern is becoming
14:26
complacent and not really understanding how
14:28
hypotheses are generated, understanding the reasoning
14:30
process itself, you know, having the
14:33
background knowledge. I mean, at the
14:35
moment, there are quite a few
14:37
limitations to this system in particular,
14:39
but assuming that these can be
14:42
very much improved in the future,
14:44
I think this is a concern
14:46
I have. Yeah. Well, Maria Liakata
14:49
from Queen Mary University of London,
14:51
thank you very much indeed for
14:53
taking us through that. So it
14:55
seems, kind of, to come back,
14:58
Gareth, to this idea that co-scientist
15:00
isn't a scientist, it's not a
15:02
human, it's a tool, and understanding
15:05
the limitations and how that tool
15:07
works is really important for science.
15:09
Yeah, hugely, and by sheer good
15:11
fortune I happened to spend a
15:14
day with a whole room full
15:16
of early career scientists yesterday, so
15:18
I asked them what they thought,
15:21
and I was reassured that there
15:23
didn't seem to be too much
15:25
potential complacency setting in, if anything,
15:27
they were really quite cautious about
15:30
systems like this. They were worried
15:32
also about their data, you know,
15:34
because they'd have to give a
15:36
lot of their data to feed
15:39
into the AI and prompt in
15:41
order to get whatever results out.
15:43
And they had worries about their
15:46
ownership of that data and then
15:48
what might come out of the
15:50
machine the other side. And also
15:52
that for them, hypothesis generation is
15:55
a key really important professional skill
15:57
for any scientist, especially for early
15:59
career scientists. It is lovely that
16:02
you have a machine that will
16:04
help you with your hypothesis, but
16:06
unlike your labmate who might riff
16:08
with you about your hypothesis and
16:11
give you a few ideas, you
16:13
can't say, well, that's a genius
16:15
idea. Where did that idea come
16:18
from? You can't ask a black
16:20
box how it did that. So
16:22
I think they were healthily skeptical
16:24
about it. On the other hand,
16:27
there's huge pressure in science to
16:29
get out there and publish. So
16:31
if you can use tools that
16:34
are going to speed up your
16:36
pipeline that your rival lab isn't
16:38
using, you know, you can see
16:40
some temptation there as well. Yeah,
16:43
which is, you know, where these
16:45
kind of pitfalls interact with all
16:47
of that promise, right? So I
16:50
should say that Google told Inside
16:52
Science that co-scientist is still in
16:54
development, is currently available via their
16:56
trusted tester program, and that's a
16:59
program that research institutions can apply
17:01
to join. Now though,
17:03
AI has also started making its
17:05
presence felt in scientific papers, specifically
17:07
generative AI, which can create new
17:09
content based on all the information
17:11
it's been trained on, including text
17:13
or images. Gareth, you have some
17:15
striking examples for me. Yes, I
17:18
do. This one, I'm blushing just
17:20
thinking about this. It was a
17:22
paper, rather a dry title
17:24
really, of 'Cellular functions of spermatogonial
17:26
stem cells'. It appeared in the
17:28
journal Frontiers in Cell and Developmental
17:30
Biology. And there was an image
17:32
that went viral on social media.
17:34
I remember this. Machine learning generated
17:36
images. And it was of a
17:38
rat, okay. And we can all
17:40
remember, you know, in our school
17:42
textbooks, maybe a biological picture of
17:44
a rat where you can see
17:46
it with its, you know, an
17:48
artist's impression of, you know, the
17:50
fur, the whiskers, and it looks
17:52
quite cute, but then maybe a
17:54
bit of it's being cut away
17:56
as a kind of a kind
17:59
of dissection. Yeah. And that was
18:01
the case with this beautiful rat.
18:03
But the only problem was that
18:05
the penis, which was also shown
18:07
in that dissection cutaway style where
18:09
you could see all the vessels
18:11
and the tissue inside, was
18:13
nearly as thick as the rat
18:15
was wide and was so large
18:17
it extended outside the frame. It's
18:19
about twice the size of the
18:21
rat's body. About twice the size
18:23
of the actual rat and some
18:25
very puerile people on
18:27
thought it was incredibly shareable and
18:29
reached places all around, and the
18:31
journal did retract the article and
18:33
apologize. I mean it was a
18:35
fairly well-known journal as well, wasn't
18:38
it? And some of the
18:40
labelling on that diagram made no
18:42
sense whatsoever. Like 'Sarah Goma cell',
18:44
I think it was, and 'synctolic
18:46
stem cells'. It also did point
18:48
to rats and got that... correct
18:50
and said this was indeed a
18:52
rat. But this is just the
18:54
tip of the iceberg. AI generated
18:56
images are making an increasingly frequent
18:58
appearance in published scientific papers. So
19:00
is that a problem? Joining us
19:02
now is image integrity analyst Jana
19:04
Christopher. Hi Jana, welcome to the
19:06
program. Hello and thanks for having
19:08
me. It's an absolute pleasure. Now it's
19:10
your job to check images in
19:12
research papers before they're accepted for
19:14
journal publication. Is that right? AI
19:17
must have... must have really changed
19:19
things for you then? Yeah, with
19:21
AI, obviously things have changed. We
19:23
were talking there, you know, somewhat
19:25
tongue in cheek about an image
19:27
that was very obviously and ridiculously
19:29
AI produced, but how difficult generally
19:31
in your job is it to
19:33
distinguish an AI produced image from
19:35
a genuine image? I mean in
19:37
a nutshell, it's very difficult, and
19:39
it can almost be impossible. So
19:41
at this point, AI image generators
19:43
still... occasionally make mistakes. If
19:45
you have an image of a
19:47
human or an everyday object that's
19:49
AI generated, then you might find
19:51
flaws like extra digits or body
19:53
parts that don't align or things
19:55
like that. But when you're talking
19:58
about histology images, for example, it
20:00
becomes much more difficult. So, Jana,
20:02
can you just explain what you
20:04
mean by a histological image? So
20:06
by that I mean a microscopy
20:08
image showing a piece of tissue
20:10
on a cellular level so that
20:12
you can see all the details.
20:14
Right. So some of these more
20:16
complex scientific images, you're delving into
20:18
kind of more detail and more
20:20
complexity and so it's difficult to
20:22
spot. Yes, and it's hard to
20:24
know, you know, when there's like
20:26
a little glitch, it's hard to
20:28
know whether that's actually in the
20:30
tissue or whether that's something to
20:32
do with, you know, with the
20:34
AI generation. So there was an
20:37
interesting study actually with over 800
20:39
participants published in Nature in November
20:41
last year, where they studied the
20:43
ability of human subjects to discriminate
20:45
between artificial and genuine histological images.
20:47
And they found a clear difference
20:49
between naive and expert test subjects.
20:51
So an example for an expert
20:53
would be an oncologist studying liver
20:55
cancer looking at images of liver
20:57
cells, right? So whereas naive participants
20:59
only classified about half of the
21:01
images correctly, the experts performed significantly
21:03
better with a hit rate of
21:05
about 70%. So I guess this
21:07
might give us hope that
21:09
some of these instances might be
21:11
picked up during peer review. But
21:13
experience shows that peer reviewers rarely
21:16
pay that much attention to manuscript
21:18
figures, and they might actually need
21:20
to be prompted to do so.
21:22
Right. They're more focused on the
21:24
text. And you can see how,
21:26
if you're presenting data looking, say,
21:28
at the impact of a treatment
21:30
on cancer cells, for example, that
21:32
that's a real problem if that's
21:34
being faked by AI. How widespread
21:36
is the problem, do we know?
21:38
We don't really know how many
21:40
papers have AI-generated images in them,
21:42
simply because we lack reliable tools
21:44
to detect them. We're seeing an
21:46
exponential growth of academic articles published
21:48
per year over the last decade,
21:50
whilst the time spent on obtaining
21:52
the results and validating them and
21:54
peer-reviewing them has decreased significantly. We've
21:57
seen some journals trying to tackle
21:59
this directly, with one actually banning
22:01
the use of AI-generated images in
22:03
scientific papers, but how else do
22:05
we tackle this issue? What can
22:07
be done? In terms of guidelines,
22:09
most journals still permit the use
22:11
of gen AI and large language
22:13
models like ChatGPT to improve the
22:15
readability of their own writing, of
22:17
course. However, they are accountable for
22:19
the accuracy of their publication, and
22:21
any use of AI must be
22:23
disclosed. So do you think academics
22:25
need to be specifically trained in
22:27
spotting them? Well, it's useful if scientists,
22:29
you know, if the readership is
22:31
able to spot these things, but
22:33
I suppose, you know, it's not
22:36
going to be possible without tools.
22:38
And so the publishers and journals
22:40
are really at the front line
22:42
of this, and they are responding
22:44
to what many regard as an integrity
22:46
crisis in scientific publishing. And they're
22:48
building up their defenses by expanding
22:50
their integrity departments, but we also
22:52
have a choice of image integrity
22:54
tools, all of which mainly look...
22:56
for image duplications and these detection
22:58
tools are also attempting to detect
23:00
AI generated images. Is that using
23:02
AI to detect AI? That's right,
23:04
exactly. That's right, and at this
23:06
point it has to be said
23:08
that they are very unreliable, unfortunately.
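[Editor's note: for a flavour of how the duplication checks Jana mentions can work, here is a minimal sketch using perceptual hashing with the open-source Pillow and ImageHash Python libraries. The file names and the distance threshold are illustrative assumptions, and, as Jana says, such tools look for near-duplicates; reliably detecting AI-generated images is a much harder, unsolved problem.]

```python
from PIL import Image   # pip install pillow imagehash
import imagehash

# Perceptual hashes summarise an image's structure, so near-duplicates
# (resized, recompressed, lightly cropped) end up with similar hashes.
hash_a = imagehash.phash(Image.open("figure_2b.png"))  # hypothetical file names
hash_b = imagehash.phash(Image.open("figure_5d.png"))

distance = hash_a - hash_b  # Hamming distance between the two hashes
THRESHOLD = 8               # illustrative cut-off; tuned per workflow in practice

if distance <= THRESHOLD:
    print(f"Possible duplicated image (distance {distance}) - flag for review")
else:
    print(f"No duplication flagged (distance {distance})")
```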
23:10
Well thank you very much, Jana
23:12
Christopher, from the Federation of European
23:15
Biochemical Societies. And thank you, Gareth
23:17
Mitchell, for taking us on this
23:19
fascinating deep dive into AI. I
23:21
think it's probably raised more questions
23:23
than answers. Certainly has. And our
23:25
rat images will stay with me
23:27
for a while. It's been a
23:29
real pleasure, though. Thank you.
23:31
Absolute pleasure. Now, though, we're going
23:33
to end today by turning away
23:35
from AI-generated content and looking up
23:37
at the night sky, because January
23:39
and February, when it's not been
23:41
raining, have given us some star-gazing
23:43
highlights. I'm joined
23:45
now by Catherine Heymans, Astronomer Royal
23:47
for Scotland. Hello Catherine. Hello Vic,
23:49
have you been looking up in
23:51
the night sky and admiring the
23:53
planetary parade? Well I had a
23:56
little glimpse the night before last,
23:58
but the cloud has just been,
24:00
as per usual, really dominating my
24:02
view of the night sky. But
24:04
yeah, I did have a little
24:06
bit. You'll often find me, Vic,
24:08
shaking my fists at the clouds.
24:10
Yeah, well, I'm in the northwest
24:12
of England. You're in Scotland, maybe
24:14
not blessed with the most stargazing-
24:16
friendly weather. But talk us through this, Catherine.
24:18
What exactly is a planetary parade?
24:20
A planetary parade, it's
24:22
such a lovely phrase. It is a
24:24
lovely phrase, isn't it? As I said,
24:27
for the last few months we've had
24:29
six planets up in our night sky
24:31
and just this week
24:33
Mercury is joining the pack, so that
24:35
completes your planetary bingo card. You've got
24:37
Mercury, Venus, Earth, which we're sat on, of
24:40
course, Mars, Jupiter, Saturn, Uranus, Neptune, all
24:42
up in the night sky, about half
24:44
an hour after sunset. You don't need
24:46
to stay out really late at night.
24:48
You don't need to get up early
24:50
in the morning, just eat your dinner,
24:52
go out, and tick all of those
24:54
planets off your card. The solar system on
24:57
parade. How often does this happen? How rare
24:59
is this? Yeah, so there are always
25:01
planets up in the night sky to look
25:03
at. The Earth goes around the Sun once
25:05
every year, so at some point during the
25:07
year you'll be able to see where the
25:09
planets are. But to have all seven up
25:12
in the night sky at the same time,
25:14
it isn't going to happen again until 2040,
25:16
so this is quite rare. But some of
25:18
my astronomy colleagues are a little bit grumpy
25:20
about the hype because actually it's not the
25:22
best time to see some of the planets.
25:25
So Saturn has been beautiful in our
25:27
night sky up until
25:29
a couple of weeks ago and now it's
25:31
getting really close to the sun and so
25:33
to see Saturn you're seeing it in the
25:35
glare of the sunset on the western horizon
25:38
so it's really hard to see Saturn at
25:40
the moment, and Mercury the same, it's only
25:42
just popped out from the glare of the
25:44
sunset, and in a few weeks' time it's
25:47
going to be much easier to see Mercury
25:49
but by then Saturn will have gone. So
25:51
we're kind of in this sweet spot right
25:53
now where we've got all seven up. It's actually
25:55
really hard to see Mercury and Saturn at
25:58
the moment, and also Uranus and Neptune, which you
26:00
always need a telescope for anyway. But
26:02
Venus, really easy to see, super bright
26:04
in the west. If you... think it's
26:06
an airplane but it's not moving, that's
26:08
Venus. You've got Jupiter right up above
26:10
your head at the moment just after
26:12
about six o'clock in the evening and
26:14
if you draw a line in your
26:16
mind between Venus and Jupiter from the
26:18
west across to Jupiter and then extend
26:20
that in an arc all the way
26:22
across to the east then you will
26:24
hit the red planet of Mars. Is
26:26
there a time? Should we be
26:28
going out in the middle of the
26:30
night? Or is it right after sunset? When's
26:32
the best time to get the best
26:34
display? If you want all seven planets
26:36
at the same time, you've got a
26:39
very short window, about half an hour
26:41
after sunset. So head out about six
26:43
o'clock. I would advise people to download
26:45
an app on your smartphone to cheat.
26:47
So there are lots of different star
26:49
apps. You can get one called Stellarium
26:51
that's free. And then you can just
26:53
point your smartphone up at the night
26:55
sky and it will tell you where
26:57
everything is. And that's a much easier
26:59
way of finding particularly Uranus and Neptune, which
27:01
you can't see with your own eye
27:03
anyway. But as the night goes on,
27:05
Mercury and Saturn will set. But you've
27:07
still got Venus, Jupiter and Mars shining
27:09
bright for a lot of the night.
27:11
And that will carry on throughout the
27:13
rest of March. And Mercury is going
27:15
to get easier to see as well.
27:17
Wonderful, get your coats on, take your
27:19
smartphones and then once you've picked out
27:21
the planets, put the phones away and
27:23
just stare up at the night sky.
27:25
Thank you very much indeed, Catherine. It's
27:27
been an absolute pleasure to talk to
27:30
you about the planetary parade. But that is all the night sky wonder and
27:40
disturbing AI-generated imagery that we have time
27:42
for this week. You have been listening
27:44
to BBC Inside Science with me, Victoria Gill.
27:46
This was a BBC Wales and West production. Do you think you know more about space than we do? Head to bbc.co.uk, search for BBC Inside Science and follow the links to the Open University to try the space quiz. And if you have any questions or comments for the Inside Science team, you can contact us by email on insidescience@bbc.co.uk. Until next time, thanks for listening.