[Image: The range of instruments used on the da Vinci robot. ©2013 Intuitive Surgical, Inc]


June 2013

The future of robotics in surgery will involve an increasingly powerful virtual environment, where surgeons are able to see through the body and potentially work side by side with autonomous robotic assistants.

Rows of gruesome medical artifacts inhabit the shelves of the Hunterian
Museum in London. From human skulls to terrifying surgical devices, the horrors and triumphs of hundreds of years of medical
advancement sit silently on shelf after shelf over two floors.

Upstairs, nestled beside the sharp blades and heavy knives once
used in surgical amputations, sits a rusty and ferocious-looking
automatic circular saw. It would have been used for churning through human bone.

A prototype device invented in 1850 by William Winchester, the
saw is more than a gruesome medical curiosity. It is an early
example of the ways in which machines have been used to augment the
natural human ability of surgeons.

Transferring the exertion of sawing from the surgeon to the
machine, the saw was perilously uncontrollable. Around 15cm in
diameter, with a large wooden handle, the saw had no kill switch or
brake; no way of safely stopping it once it had been wound up and
set into motion.

As the museum caption dryly notes, this simple machine, this
primordial robot, “unsurprisingly didn’t catch on”.

Today, our tools for augmenting a surgeon’s natural ability are light years away from Winchester’s dumb saw. Surgical robots in the 21st century allow humans to operate on one another with superhuman precision. The elimination of jitters and shaking gives every surgeon a rock-steady hand, while even the most minute incisions are made possible by the robot’s careful movements.

“[Robot-assisted surgery] is more controlled, more precise,
better able to dissect tissue, better able to control bleeding and
preserve important structures,” says Ben Challacombe, consultant
urologist at Guy’s and St Thomas’ Hospital in London. He has been
using surgical robots since 2006, and has carried out almost 500
robot-assisted operations.



[Image: Fluorescence imaging in the kidney using the da Vinci robot’s “Firefly” feature. ©2013 Intuitive Surgical, Inc]

But more than simply augmenting our physical capabilities, the
future of robotics in surgery will involve an increasingly powerful
virtual environment, where surgeons are able to see through the
body, control robots directly with their mind and potentially work
side by side with autonomous robotic assistants.

Virtual reality surgery
The human abdomen, or the abdominal cavity in medical speak, is a messy place. Your intestines compete for space with your stomach, liver and kidneys in a swamp of organs and tissue.

Whereas once standard surgical procedure would be to cut open your body, opening the door to this messy and cluttered broom closet, keyhole surgery is now widespread. Surgeons operate through tiny incisions in the body, reducing scarring and shortening healing times.

A generation of robots, like the da Vinci surgical robot, has
given surgeons the dexterity and mobility to operate in the
confined and claustrophobic interior of the body. But combined with
advances in imaging, the technology allows surgeons to see through
the body like never before.

“Augmented reality gives us the opportunity to superimpose
information such as, here are vessels or here are nerves that you
don’t want to disturb, over the view of the actual tissue,” says
Catherine Mohr, Director of Medical Research at Intuitive Surgical,
the company that makes the da Vinci surgical robot.

Not only can MRI and CAT scans be overlaid onto the surgeon’s
video feed, providing an information-rich 3D virtual environment
that allows the surgeon to see structures that would otherwise be
invisible, but specific areas can be “lit up”, allowing them to be
targeted easily by the surgeon.

“[Take] for instance a kidney with a tumour,” says Challacombe.
“You could have the tumour in green and you could portray where,
beneath the surface, the tumour was in the kidney.”

The technique he’s referring to is fluorescence imaging. Before
the procedure, the tumour is injected with a fluorescent dye, which
then shows up under certain wavelengths of light, like
near-infrared or UV, depending on the dye.

It’s analogous to a soldier using infrared goggles at night to
spot the enemy. It lights up tumours, making them stand out against
the background morass of flesh and connective tissue.
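In software, that highlighting step can be surprisingly simple: threshold the infrared channel, then tint the matching pixels in the surgeon’s video feed. The Python sketch below is purely illustrative; it assumes a near-infrared frame already registered to the endoscope’s colour image, and the function name and threshold are invented for the example rather than taken from any real system.

    import numpy as np

    def highlight_fluorescence(rgb_frame, nir_frame, threshold=0.6, tint=(0, 255, 0)):
        """Tint pixels whose near-infrared intensity suggests fluorescent dye.

        rgb_frame: (H, W, 3) uint8 visible-light image from the endoscope.
        nir_frame: (H, W) float array in [0, 1], registered to rgb_frame.
        """
        mask = nir_frame > threshold  # pixels where the dye is fluorescing
        overlay = rgb_frame.copy()
        # Blend a green tint at 50% opacity over the fluorescing region.
        overlay[mask] = (0.5 * overlay[mask] + 0.5 * np.array(tint)).astype(np.uint8)
        return overlay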

But more than simply a beacon, this technique can be used to track the movement of cancer to the lymph nodes, allowing the surgeon to stem the spread of the cancer before it becomes more advanced.

The net effect is to create an augmented reality environment, where the surgeon is able to see far more than is simply visible on the surface. Indeed, researchers at Technische Universität München in Germany are using a virtual reality surgical headset to overlay live anatomical data over the surgeon’s field of view.

One example is the introduction of visual constraints, which sound comically similar to the board game Operation. “We can put a 2mm safety margin around [the tumour],” says Challacombe, “so the screen is going to go red or green if you go into that zone.”
“[It’s almost like] an idiot-proof, surgeon-proof safety guide,”
says Challacombe — whether it’s reassuring to know that an
operation is “surgeon-proof” is another matter entirely.
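In code, that kind of constraint can reduce to a simple distance check. The sketch below is again illustrative only: it assumes a tracked instrument tip and a tumour surface model registered into the same coordinate frame, and none of the names come from a real surgical system.

    import numpy as np

    SAFETY_MARGIN_MM = 2.0  # the 2mm margin Challacombe describes

    def margin_status(tip_position, tumour_points, margin_mm=SAFETY_MARGIN_MM):
        """Return 'red' if the instrument tip breaches the safety margin.

        tip_position: (3,) array, the instrument tip location in mm.
        tumour_points: (N, 3) array of points on the tumour surface,
            registered into the same coordinate frame as the tip.
        """
        distances = np.linalg.norm(tumour_points - tip_position, axis=1)
        return "red" if distances.min() < margin_mm else "green"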

But while robots are helping surgeons to see and do more than would be humanly possible, the question of what we allow the robots to do themselves is an unanswered one, both technically and ethically.

Smart, autonomous knives
The same messy, non-linear nature of soft tissue in the body that makes imaging so useful also makes robotic autonomy exceedingly difficult.

“The problems that are going to be encountered are really considerable,” says Mohr. “Being able to do automated work in as deformable and variable and non-homogeneous tissue as humans have [is very difficult]”.

In other words, robots are best suited for repetitive, straightforward work that doesn’t change from case to case — which is as far removed from human surgery as can be imagined. Not only is every person different, but the human body is also full of things that are ready to start bleeding at a moment’s notice.

One of the earliest automated surgical robots was the “Probot”, a robot developed to reduce the size of the prostate in older men.


It would be programmed to cut within a certain area, and would
chop away within that defined area until it was told to stop; a
medical Roomba, as it were.
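The control logic behind such a machine can be pictured as a simple bounded loop. The Python below is a loose illustration of that idea, not the Probot’s actual software; the robot interface and its methods are invented for the example.

    def within_bounds(tool_pos, lo, hi):
        """True if the tool tip lies inside the surgeon-defined cutting volume."""
        return all(l <= p <= h for p, l, h in zip(tool_pos, lo, hi))

    def resection_loop(robot, lo, hi):
        """Keep cutting inside the programmed region until told to stop."""
        while not robot.stop_requested():
            target = robot.next_cut_target()
            if within_bounds(target, lo, hi):
                robot.cut(target)  # chop away, but only within the defined area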

“The surgeon could literally go to the coffee room and come back
ten or 20 minutes later,” says Challacombe. “It was an amazing
robot […] but very dangerous.”

The fundamental challenge with automated robotic surgery is decision-making. How do you create an algorithm that not only recognises the structures inside the body, but can also make decisions based on unexpected events? A robot not only has to spot a vein, but also know to avoid it, and to detect if it starts bleeding. After all that, it has to make the decision to stop that bleeding.
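One way to picture the problem is as a perception, decision and action loop. The Python below sketches only that structure; every interface in it is hypothetical, and nothing corresponds to a real surgical platform.

    def autonomy_tick(scene, planner, robot):
        """One hypothetical control-loop tick for an autonomous surgical robot."""
        structures = scene.detect_structures()  # e.g. vessels, nerves, the tumour
        if scene.detect_bleeding():
            # An unexpected event: halt and hand control back to the human.
            robot.halt()
            robot.alert_surgeon("bleeding detected")
            return
        # Plan the next movement while steering clear of anything fragile.
        vessels = [s for s in structures if s.kind == "vessel"]
        robot.execute(planner.next_move(avoid=vessels))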

Far be it from humans to step away from a challenge, however. At the end of 2012, a team of leading roboticists at the University of California, Berkeley, and other institutions was awarded a four-year grant to develop the foundations for autonomous robotic surgeons.


In some surgeries, a degree of automation is already possible.
Certain eye surgeries, for example, involve the use of a
pre-programmed robot that performs precise incisions on the cornea.
Orthopaedic procedures that require the milling away of bone use
automated robots. It’s in these types of surgeries, where the body
is straightforwardly defined, and the outcomes and processes can be
clearly measured, that robotic automation is most easily achieved.

But some in the field object to the idea that autonomy is a positive end in itself.

“I personally don’t think that fully autonomous surgical robots
is the right approach to take,” says Guang-Zhong Yang, Director of
the Hamlyn
Centre for Robotic Surgery at Imperial College London.

“You have a human, which is pretty good in terms of
decision-making, and learning. You have a robot, which is good at
doing precise movements,” says Yang, who argues that one or the
other acting independently is an inferior solution. “Why not use a combination of both?”

One of the ways to combine humans and robots is an experimental technique called “perceptual docking”, where the eye movements of the surgeon are tracked in order to teach the robot the cognitive processes and decision-making paths involved in surgery. Developed at Imperial College London, the idea is the first step on the road to more invasive and direct neural interfacing between the robot and the surgeon.
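A first step towards perceptual docking is simply collecting the gaze data. This sketch, built on invented interfaces, pairs the surgeon’s fixations on the console display with the robot’s state at that moment, the raw material a learning system would need.

    import time

    def same_fixation(a, b, tol_px=15):
        """Gaze samples within tol_px pixels count as one fixation."""
        return abs(a[0] - b[0]) <= tol_px and abs(a[1] - b[1]) <= tol_px

    def record_fixations(eye_tracker, robot, log, min_dwell_s=0.3):
        """Log where the surgeon is looking alongside what the robot is doing."""
        start, last = None, None
        while robot.operating():
            point = eye_tracker.gaze_point()  # (x, y) pixel on the display
            if last is not None and same_fixation(point, last):
                if time.time() - start >= min_dwell_s:
                    log.append({"gaze": point, "state": robot.state()})
            else:
                start = time.time()  # a new fixation begins
            last = point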

Surgical robots today are, in essence, very advanced tools. With no decision-making functions of their own, they merely channel the surgeon’s actions. In Yang’s words, they are “complicated scalpels”.

But after the technical obstacles have been surmounted, and
these complicated scalpels become autonomous knives, new moral
questions will arise, similar to those posed by the use of robots in warfare.

“There’s a debate at the moment about autonomous robots in
warfare,” says Richard Ashcroft, Professor of Bioethics at Queen Mary, University of London. “[But] put crudely, warfare is much less difficult
than surgery.”

The stakes are just as high, of course. Both involve a responsibility for the lives, and deaths, of human beings. But while we might be comfortable with the use of autonomous robots on far-off battlefields — out of sight, out of mind — convincing family members to let a machine operate on their loved ones is perhaps more challenging.

Part of the problem is responsibility. A human surgeon is
responsible for his or her actions, but for a robot guided by a
cloud database of medical information and complicated learning
algorithms, it’s less clear where responsibility lies.

“Where the surgeon makes a mistake, you can point to the man or
the woman and say that it was you,” says Ashcroft. “Where a robot
makes a mistake, one of the difficulties for patients is that the
location of responsibility becomes much more diffuse.”

We’re already beginning to see that argument play out in the US,
where Intuitive Surgical is being sued in a series of medical
negligence cases regarding the training of surgeons in the use of
the da Vinci robot. The first of at least 26 lawsuits concerning
the company was recently dismissed, but the litigation is a taste of what
might be expected when we begin to invest robots with
decision-making capabilities.

For the wealthy alone?
Despite the technological advances, surgical robots remain the
reserve of wealthy hospitals in wealthy countries. Of the 2,500 da
Vinci robots sold to date, almost 2,000 are in the United
States. The robot costs up to $2 million (£1.3 million), not to mention
the yearly running costs, the cost of the associated instruments —
batteries are not included, so to speak — and the cost of training
surgeons to use the complicated equipment.

There are currently 31 da Vinci robots in the UK, all of them in
England. Chelsea
and Westminster Hospital recently purchased one for use in
paediatrics. It’s the first da Vinci robot in the UK intended
for sole use on children.


The Children’s Hospital Trust Fund, a charity based at the
hospital, spent four years raising the funds to purchase the robot.
Munther Haddad, a consultant paediatric surgeon and chair of the
Fund, says that resource constraints in the NHS inevitably make big
equipment purchases difficult.

And despite making the purchase, the Fund still needs to raise
another £500,000 to cover equipment and running costs. The great
potential of surgical robotics comes at a steep price.

“Surgical robotics today is very similar to digital computers in
the late 70s and early 80s,” says Guang-Zhong Yang.
“Certainly, the machines have come a long way, but they are big,
bulky and they’re expensive.”

But as with computers, they will inevitably become smaller and more powerful in terms of their mechanical, sensing, and imaging capabilities.
Crucially, says Yang, increased affordability will democratise the technology. No longer will only an elite few have access to it; instead it will be used routinely in all manner of surgical procedures.
And we’ll one day look at today’s robots with the same base curiosity with which we view the tools in the Hunterian Museum, wondering how we ever let surgeons do operations with their bare hands.
