New Scientist - USA (2022-06-04)




News


“A combination of a coated
drug and radiotherapy
eliminated tumours
in 67 per cent of mice”

Health

Immunotherapy for brain cancer can be made more effective

Carissa Wong

COATING a cancer drug with
nanoparticles can help it cross the
barrier that protects the brain from
chemicals elsewhere in the body,
enabling it to unleash an immune
response that kills brain tumour
cells. In a small study in mice, the
encapsulated drug – used with
radiotherapy – eliminated tumours
in two-thirds of the animals.
Glioblastoma is an aggressive
type of tumour that forms when
brain cells called glial cells grow
out of control. These tumours
are usually treated with surgery,
followed by radiotherapy and
sometimes chemotherapy.
However, while some drugs have
shown promise at boosting the
body’s immune response against
brain tumours, the drugs often
struggle to cross the blood-brain
barrier to reach the tumour cells.
Maria Castro at the University
of Michigan Medical School and
her colleagues have now tested
whether encapsulating a drug in
synthetic nanoparticles – made with
proteins found in the body – could
help it reach the brains of mice.
The team injected mice
with either the drug alone or a
nanoparticle-coated version. Eight
hours later, mice that received the
coated treatment had three times
more of the drug in their brains
than mice given a normal dose.
Castro’s team then tested how
well the drug – with and without
the nanoparticle coating – worked
in mice that had tumour cells
implanted into their brains.
A combination of the
nanoparticle-coated drug and
radiotherapy eliminated tumours
in 67 per cent of mice, enabling
them to survive for 90 days, on
average, from the moment tumour
cells were implanted. Mice that
received either radiotherapy or
the encapsulated drug alone
died after about 60 days (ACS
Nano, doi.org/gp7ndb).
The treatment must still be shown
to work in other animal models of
glioblastoma before it can be
tested in humans.  ❚

Analysis Artificial intelligence

Will text-to-image AIs put illustrators out of a job?
Google’s Imagen AI produces high-quality images from a text
prompt, but is it ready to replace humans, asks Alex Wilkins

TECH firms are racing to create
artificial intelligence algorithms
that can produce high-quality
images from text prompts, with the
technology seeming to advance so
quickly that some predict that human
illustrators and stock photographers
will soon be out of a job. In reality,
limitations with these systems mean
it will probably be a while before they
are available to the general public.
Text-to-image generators that use
neural networks have made great
progress in recent years. The latest,
Imagen from Google, comes hot on
the heels of DALL-E 2, which was
announced by OpenAI in April.
Both models use a neural
network trained on a large number of
examples to learn how images
relate to text descriptions. When
given a new description, the network
repeatedly generates images, altering
them until they most closely match
the text based on what it has learned.
While the images presented by
both firms are impressive, researchers
have questioned whether the results
are being cherry-picked to show
the systems in the most favourable
light, a common practice. “You need
to present your best results,” says
Hossein Malekmohamadi at
De Montfort University in the UK.
One problem in judging these
AI creations is that both firms have
declined to release public demos
that would allow researchers and
others to put them through their
paces. Part of the reason for this is
a fear that the AI could be used to
create misleading images, or simply
that it could generate harmful results.
The models rely on data sets
scraped from large, unmoderated
portions of the internet, such as
the LAION-400M data set, which
Google has said is known to contain
“pornographic imagery, racist slurs,
and harmful social stereotypes”.
The researchers behind Imagen
say that as they can’t guarantee it
won’t inherit problematic content,
they can’t release it to the public.
OpenAI claims to be improving
DALL-E 2’s “safety system” by
“refining the text filters and tuning
the automated detection & response
system for content policy violations”,
while Google is seeking to address
the challenges by developing a
“vocabulary of potential harms”.
Neither firm was able to speak to
New Scientist before publication.
Unless these problems can be
solved, it seems unlikely that big
research teams like Google or
OpenAI will offer their text-to-image
systems for general use. It is possible
that smaller teams could choose to
release similar technology, but the
sheer amount of computing power
required to train these models on
huge data sets tends to limit work
on them to big players.
Despite this, the friendly
competition between the big firms
is likely to mean the technology
continues to advance rapidly,
as tools developed by one group
can be incorporated into another’s
future model. For example, diffusion
models, in which neural networks learn
to reverse the process of gradually
adding random noise to an image,
have shown
promise in machine-learning models
in the past year. Both DALL-E 2 and
Imagen rely on the technique, after
it proved effective in less-powerful
models, such as OpenAI’s GLIDE
image generator. ❚
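The noising process that diffusion models learn to reverse can be sketched numerically. The toy example below is an illustration only, not code from Google or OpenAI: the linear noise schedule, the 4×4 "image" and the function names are all arbitrary assumptions. It shows only the forward half of the process – repeated noising that gradually destroys an image's signal – which is what a trained diffusion model runs in reverse to generate a picture.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(image, t, betas):
    """Forward diffusion sketch: blend the image with Gaussian noise.

    After t steps the image keeps a sqrt(alpha_bar) fraction of its
    signal, with the remainder replaced by noise; alpha_bar shrinks
    as more steps are applied.
    """
    alpha_bar = np.prod(1.0 - betas[:t])
    noise = rng.standard_normal(image.shape)
    return np.sqrt(alpha_bar) * image + np.sqrt(1.0 - alpha_bar) * noise

# A tiny stand-in "image" and an assumed linear noise schedule.
image = np.linspace(0.0, 1.0, 16).reshape(4, 4)
betas = np.linspace(1e-4, 0.2, 50)

slightly_noisy = add_noise(image, 1, betas)   # early step: image survives
mostly_noise = add_noise(image, 50, betas)    # final step: almost pure noise

# Early steps barely disturb the image, so it stays highly correlated
# with the original; by the last step nearly all signal is gone.
print(np.corrcoef(image.ravel(), slightly_noisy.ravel())[0, 1])
```

A generator then works backwards: starting from pure noise, a neural network estimates and removes a little of the noise at each step, with the text prompt steering each denoising step towards a matching image.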

Six examples of images generated by Google’s Imagen AI (Image: Imagen/Google)