Are modern gene-editing techniques capable of creating genetically-superior versions of humans?



Can we alter the DNA in, say, a small-framed, low muscle mass male to those of elite bodybuilders?

Can we alter the DNA sequence that stops balding and hair loss? How about the genes responsible for bone metabolism, hormonal profiles, and the entire endocrine/hard tissue foundation of humans?

Can we alter the genes responsible for looks, so that they make an ugly person hotter?

Basically, summarize this to: "Can gene editing create genetically-advanced versions of ourselves?"

Uglier to prettier? Shorter to taller? Weaker to stronger? Slower to faster? Dumber to smarter?


No, this is still science fiction. There have been some experimental efforts to correct very simple, monogenic diseases. Some of these experiments seem to have helped the patient, many were ineffective, and some killed the patient or gave them cancer. Gene therapy in humans is still at a very early experimental stage. On top of that, the traits you are talking about are almost certainly governed by multiple genes, and the genetic networks are "incompletely understood", to put it mildly.


Frankenstein redux: Is modern science making a monster?

Could current experiments in science and technology lead to the creation of a modern-day Frankenstein's monster?

A towering gargantuan beast with yellowed skin, shrivelled lips and sunken eyes, hiding in the shadows, waiting to squeeze the life out of any who cross its path - this is the creation of a man who played with science. It's been 200 years since Mary Shelley planted the seeds that became the novel Frankenstein, and her ominous warning sounds louder than ever: do not interfere with what you do not understand.

We have come a long way from the idea of stitching together decomposing body parts and somehow 'zapping' them to life. Living things are no longer thought to be animated by 'animal energy' created by the soul. Yet scientists still attempt to recreate life in different forms, and several fields find themselves tarred with the 'Frankenstein' brush.

None more so than synthetic biology. In 1999, world-renowned synthetic biologist Craig Venter set the ball in motion by exclaiming "Shelley would have loved this!" when he announced plans to create the first synthetic biological genome. Later, in 2009, academic philosopher Henk van den Belt, in a paper published in Nanoethics, questioned whether synthetic biology could be accused of 'Playing God in Frankenstein's Footsteps' by attempting to manipulate life.

Synthetic biology may seem the most obvious example, but as far back as the 1950s, another major experimental field has also been criticised for trying to play god - artificial intelligence (AI). In 1950, Isaac Asimov coined the infamous 'Frankenstein complex' in his collection 'I, Robot', with one of the first predictions of a robotic Frankenstein's monster. Several other authors and mainstream movie directors have since followed suit.

So why are we still obsessed with Frankenstein? Because even with the strictest ethics and highest standard of care, things can, and do, go wrong. With advances in science and technology announced every day, the potential for disaster seems closer than ever.

How worried should we be about the scientific fields that attempt to create new life? Is there any real chance that someone could create a modern-day version of the Creature that so plagued Victor Frankenstein's existence? Let's pause to consider what could happen if things went awry.

Synthetic Biology

This field has probably received the brunt of Frankenstein comparisons over the years. There have been countless stories in the media of modern-day Frankenstein experiments, and scientists attempting to artificially create and manipulate living organisms. Just how justified is this comparison, and should we actually be afraid?

On the whole, synthetic biology is about engineering natural biological systems to make them better, or more useful. In fact, many practitioners in this field are not life scientists by training, but engineers who have crossed over into the area.

Contrary to what the media may have you believe, says Richard Hammond, head of synthetic biology at Cambridge Consultants, the synbio community are not mad scientists in labs. Rather, "the intent of most people working in the field is to improve things in some way," he comments. "There are very real and difficult problems in the world that people are trying to solve."

To do this, synthetic biologists take natural molecules and reassemble them to create systems that act unnaturally. Manipulating organisms in this way can be put to a whole host of uses, from diagnostics to creating micro-factories in the form of 'reprogrammed' cells that produce drugs and other chemicals. In the past, synthetic biologists have produced diagnostic tools for viruses such as HIV and hepatitis.

Of course, there are risks with using technology of this kind, but these are no different from those posed by any other type of scientific research. Sometimes, experiments do not go as planned, but provided they are carried out under controlled circumstances, this shouldn't be a problem.

"The issue is that fear dominates the conversation," says Rob Carlson, director of Bioeconomy Capital. "There are many countries in the world in which scary stories about synthetic biology or about genetically modified organisms completely overshadow any fact that might be available."

One concern that scientists have is the potential mismatch between the knowledge of how biological systems really work and the ability to make changes to them. The gene-editing techniques that science has are almost all derived from nature, but small changes to the biochemistry have produced increasingly powerful tools.

Discovered in 2012, the Crispr-Cas9 system allows precision editing of all kinds of cells - bacteria, plant and animal - with little risk of the edits turning up in the wrong part of the genome. It&rsquos proved successful in lab studies at addressing hereditary diseases and conditions by editing out genetic mutations in genomes. This has included lab studies in which mice genomes were successfully edited to correct mutations that cause the metabolic disorder hereditary tyrosinemia in humans.
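The "precision" of Crispr-Cas9 rests on simple sequence rules: the enzyme is steered by a 20-nucleotide guide and will only cut immediately next to a short 'PAM' motif (NGG for the commonly used Cas9), making its cut about 3 base pairs before that motif. As a toy sketch only (the function name and demo sequence below are invented for illustration, and a real design tool would also scan the reverse strand and score each site), a search for candidate target sites can be written in a few lines:

```python
import re

def find_cas9_sites(dna):
    """Return (protospacer, cut_index) pairs for each NGG PAM on the forward strand."""
    sites = []
    # Lookahead so overlapping PAMs are all found; PAM = any base followed by GG,
    # with 20 nt of protospacer context immediately before it.
    for m in re.finditer(r"(?=([ACGT]GG))", dna):
        pam_start = m.start()
        if pam_start >= 20:
            protospacer = dna[pam_start - 20:pam_start]
            cut_index = pam_start - 3  # Cas9 cuts ~3 bp 5' of the PAM
            sites.append((protospacer, cut_index))
    return sites

demo = "TT" + "ACGTACGTACGTACGTACGT" + "AGG" + "CCCC"  # invented sequence for the demo
print(find_cas9_sites(demo))
```

The point of the sketch is that targeting is purely sequence-driven: any 20-nt stretch sitting next to an NGG can, in principle, be addressed.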

However, with this increased confidence over editing prowess comes risk, says Seth Goldstein, associate professor in computer science at Carnegie Mellon University, who is particularly concerned by recent reports that scientists in China have used Crispr-Cas9 to edit non-viable human embryos to make them resistant to HIV infection.

"The news that the Chinese have recently used Crispr-Cas9 to modify a human embryo is one of the scariest things I have read in the recent past," he says. "We think we know more than we do and we are going to start selecting for our children based on things we don't really understand."

The combined result of all these concerns is that research into synthetic biology is subject to huge constraints in terms of restrictions and regulations, which not only prevent dangerous materials from entering the environment, but also stop potentially lifesaving applications from making it into the field.

In 2012, researchers from Cambridge University announced that they had developed biosensors for use in detecting arsenic in groundwater, a blessing for countries such as Bangladesh, where contaminated water causes serious problems for the population. The biosensor, developed by Dr Jim Ajioka and Dr Jim Haseloff, is cheap, non-toxic and easy to use, but the project has stalled because the sensor is not approved in Europe.

"The European Commission are essentially blocking its introduction because they don't have any proper mechanisms for dealing with this kind of innovation," says Richard Kitney, professor of biomedical systems engineering at Imperial College London.

Projects like this don't only come up against governmental blocks. "A lot of NGOs out there take the attitude of 'this could be dangerous so we'd better not do it'," says Kitney. "What they don't do is take into account the risk of doing nothing. There are literally thousands of people in Bangladesh who are dying, or seriously disfigured, from drinking arsenic in groundwater."

Programmable matter

For the Symbrion research project, researchers from the Bristol Robotics Laboratory and other groups designed cubic robots that could move and act individually but, when programmed to, would work together, even combining into a larger, more capable robot. The idea is the first step towards 'programmable matter'.

Alan Winfield, researcher in cognitive robotics at the lab, says the 10cm robots were "absurdly large" but they demonstrate what could be achieved with mechanical systems that cooperate with each other. "If you were to imagine shrinking those robots to things a fraction of the size of a sugar cube, and if you had hundreds of those, then you are getting something approaching what you could call programmable matter," he says.

There are many potential applications of such technology. In disaster situations, swarms of microscopic robots could be sent into collapsed buildings to tend to injured survivors. Shrunk further, swarms could be used to perform medical procedures inside the human body, entering through a keyhole-sized incision.

Miniaturisation, though, is the problem. "If you want to have robots that are literally the size of a grain of sand they certainly can't be made right now, and they probably can't be made even in the foreseeable future," says Winfield.

Programmable matter in fiction provides a potential monster: take the morphing T-1000 robot from the Terminator movies or Michael Crichton's sentient and genocidal nanobot 'swarms' in his 2002 novel 'Prey'. But Winfield believes this kind of risk is largely hypothetical. It would be straightforward to introduce a kill switch to cause the individual robots to separate and go dormant, he argues. That assumes the assembled robot has a will of its own.

"Taking a whole bunch of cells and just gelling them together doesn't make an intelligent thing," he says. "Programmable matter is more akin to a sponge than an autonomous intelligent machine."

If the kill switch fails, simply starving the creature would almost certainly disarm it.

"Most robots, including the ones in our projects, have a battery, which means that they have a fixed lifetime," says Winfield. "Once the energy runs out then that's it, the robot just stops working."
In any case, Winfield argues, the situation would arise only if someone were to design a system of self-assembling robots that could replicate. That would be adding even more complexity to something that is already hard to miniaturise.

The nearest we are likely to get in even the medium-term is a robot probe sent to Mars or some other planet alongside a 3D printer and feedstock that would allow it to make repairs to itself in the field. This is the essence of John von Neumann's concept of interstellar exploration: each robot making others to venture further. The ability to make a group of machines self-sufficient remains well into the distant future. The key will be to ensure that they do not replicate themselves into a threat.

Artificial Intelligence

Elon Musk is the backer of several ambitious technological ventures - not just electric cars, but spacecraft and a supersonic travel tube. However, he is worried about the prospect of a technological monster: AI. He wrote on Twitter in 2014: "Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable."

Stephen Hawking has similar fears. In an interview with the BBC in 2014, he said that developing AI fully "could spell the end of the human race". Although the primitive forms of AI developed so far have already proved very useful, Hawking says he fears the consequences of creating something that can match or surpass humans. None of the examples we have today come even close to being able to do this.

There are many examples of attempts to create AI, from Google's Go-playing AlphaGo computer program to Microsoft's teenage Twitter chatbot Tay, but these don't really live up to true 'intelligence' of the kind that Hawking and Musk are concerned about. AlphaGo has restricted capabilities outside of playing Go, and Tay's skills are limited to regurgitating other Twitter users' tweets.

But what could happen if a machine were to become truly intelligent, possibly even sentient?

One concern is that such a machine could bring about a 'technological singularity' - a hypothetical event in which AI becomes capable of autonomously building ever smarter and more powerful machines. It sounds scary, but many people working in robotics argue that real AI, the likes of which could bring about such a situation, is still a long way off, if indeed it can ever be realised.

Professor Robert Sparrow's research interests at Monash University include applied ethics. He points out that most of the arguments in favour of achieving machine sentience have to do with comparing neurons in the brain with transistors on chips. The reality, he says, is far more complicated.

"If you look at what makes human beings tick when we are trying to repair them, when someone comes to a psychiatrist when they are mentally ill, we are completely clueless about how the brain works, and our treatments are laughably primitive," he says. "If someone said to me that they are going to build a sentient robot in the next 20 years, I would be very surprised."

Noel Sharkey, a computer scientist and co-director of the Foundation for Responsible Robotics, is also sceptical. "As a scientist I could never say never," he says when pondering the question of whether we will ever achieve machine sentience. "I just don't think that we have any handle on sentience. It seems to be quite a different thing from a program running on a non-living machine."

However, Sparrow admits that the possibility of a sentient robot is very concerning.

"I think that potentially it is immensely dangerous," he says. "Some of the people who believe that we are on the verge of creating machine consciousness themselves believe that this will make human beings obsolete, that we will quickly be outthought by our machines."

Being forced into submission by a race of superior beings is an unsettling prospect and strikingly similar to Victor Frankenstein's own fears. In the novel, Victor refuses to make a mate for his Creature for fear that their children might supersede the human race, describing the children as "a race of devils ... who might make the very existence of the species of man a condition precarious and full of terror."

"If you are looking for a contemporary Frankenstein it is artificial intelligence," says Sparrow. "That is where people think that one day there is a chance that we will make something that will look back at us, or maybe even decide to wipe us all out."

In the past, Elon Musk has called for regulatory oversight of AI research to make sure that no one does anything "very foolish". This is exactly what Noel Sharkey and others from the Foundation for Responsible Robotics are attempting to implement, starting at a much simpler level than the search for true AI. "We must be very wary of the control that we cede to machines and always ensure human oversight," says Sharkey.

"It would not take super-intelligent machines to take over the world. The natural stupidity of humans could grant too much control to dumb machines," he says. "But I believe in humanity and our ability to stay in charge, providing that we begin to put good policies in place now by bringing all of the stakeholders together to discuss the common good."

Shelley's warning

In the most recent movie adaptation of Mary Shelley's novel, Victor Frankenstein's assistant Igor attempts to reassure a scared female acquaintance with the words: "Every day, science and technology changes the way we live our lives." Yet he avoids declaring how the changes might affect humanity.

At the time when Shelley wrote Frankenstein, the Enlightenment - a time of rapid advances in science and technology - was drawing to a close. Her novel pointed to the problem of seeing every advance as inevitably a power for good. Victor realises only too late the cost of what he was trying to achieve: "I had desired it with an ardour that far exceeded moderation but now that I had finished, the beauty of the dream vanished, and breathless horror and disgust filled my heart."

AI, synthetic biology and programmable matter all have the potential to change our lives, but they may contain the essence of monsters nobody wanted to create. The difference from the novel is that people are thinking of how it might go wrong - and how to prevent it. Just watch out for those who claim the results will naturally be beautiful.



GM 2.0? 'Gene-editing' produces GMOs that must be regulated as GMOs

There has been a lot in the news recently about the ethics of gene editing in humans.

But, as yet largely unnoticed, the European Commission is considering whether the gene-editing of plants and animals, for example in agriculture, should be exempted from regulation or even falls outside the scope of EU law governing genetically modified organisms (GMOs).

In other words, whether the products of gene-editing should be labelled and regulated as GMOs, or allowed to enter the food chain untested and unlabelled.

If you believe the proponents' claims, gene-editing is nothing more than the 'tweaking' of DNA in plants and animals - nothing to be concerned about.

But the reality is that gene editing is simply GM 2.0, with many of the same concerns and problems as the GM crops that Europeans have already rejected.

What is gene-editing?

Gene-editing is a form of genetic engineering. It covers a range of new laboratory techniques that, just as older genetic engineering techniques, can change the genetic material (usually DNA) of a living organism, for example a plant or an animal, without breeding.

In many respects, they are similar to the 'traditional' genetic engineering we are familiar with. The difference is that these techniques can change the DNA of the plant or animal at a specific 'targeted' location, compared to the insertion of genes at random locations characteristic of previous techniques.

Many of these techniques can be used to insert genes from an unrelated species into a plant or animal, as traditional genetic engineering does, and the resulting products, with their novel genes, would be regarded as GMOs. But not all the applications of gene-editing involve the insertion of novel genes.

The current debate surrounds applications of gene-editing that, instead of inserting genes, re-write genes using a sort of 'DNA typewriter'. The question is whether plants and animals with 'edited' genes (without inserted novel genes) should be regulated as GMOs.

Products of gene-editing with re-written genes that might be imported, grown or farmed in Europe (including the UK) in the near future include a herbicide-tolerant oilseed rape, produced by a technique known as oligonucleotide directed mutagenesis (ODM), and hornless cattle, developed through a technique known as 'CRISPR'.

CRISPR is becoming well known in scientific circles as it's a particularly efficient method of gene-editing.

The risks of gene-editing?

With current commercial GM crops, one of the major concerns is that unexpected effects can result, and have resulted, from the genetic engineering process, and these can affect food and environmental safety. These effects can include altered levels of toxins or nutritional compounds and changes to the protein chemistry, which could produce new allergens.

That is why the EU has set up regulations for GM organisms, requiring them to undergo an environmental and health risk assessment before they are grown or reared commercially or enter the food chain. Even so, doubts linger as to the effectiveness of these assessments.

'Traditional' genetic engineering involves the random insertion of genes (or genetic sequences) into an organism's genome. Proponents tell us that gene-editing is far more precise than the genetic engineering techniques we are familiar with. But what exactly is meant by 'precise' here?

Gene-editing techniques may perhaps be more precise at the level and point where the DNA is altered but how this altered DNA might affect interactions with other genes and processes within the cell is largely unknown. Importantly, these gene-to-gene interactions within the cell are reflected in the organism as a whole.

The effects of the altered DNA on the wholesomeness as a foodstuff and how the organism interacts with the environment are far from being precisely known. Therefore, although gene-editing may be more precise in the intended location where the DNA is modified, there is still potential for unexpected and unpredictable effects.

Such effects could have implications for food, feed and environmental safety if they increase levels of toxic compounds, reduce levels of nutritional compounds or even produce new allergens.

'Off-target' genetic alterations

Just like traditional genetic engineering, gene-editing techniques can cause unintended alterations in the DNA. For example, several gene-editing techniques use so-called 'molecular scissors' to cut DNA as part of the editing process.

These molecular scissors sometimes have what is known as 'off-target' effects. This means the DNA is cut in unintended places as well as the intended places, accidentally causing additional genetic alterations.
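The off-target idea can be illustrated with a toy scan (a hypothetical sketch, not a real off-target predictor - the function names, the guide and the genome string below are all invented for this example): because the molecular scissors tolerate a few mismatches between their guide sequence and the genome, any stretch of DNA within a small number of letter differences of the guide is a candidate for an unintended cut.

```python
def hamming(a, b):
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def off_target_scan(genome, guide, max_mismatches=3):
    """Return (position, window, mismatches) for every near-match to the guide."""
    hits = []
    for i in range(len(genome) - len(guide) + 1):
        window = genome[i:i + len(guide)]
        d = hamming(window, guide)
        if d <= max_mismatches:
            hits.append((i, window, d))
    return hits

guide = "ACGTACGTACGTACGTACGT"
# Invented genome: one exact copy of the target, plus a second copy with one mismatch
genome = "TTT" + guide + "GG" + "ACGTACGTACGAACGTACGT"
for pos, win, d in off_target_scan(genome, guide):
    print(pos, d)  # d == 0 is the intended site; d > 0 are off-target candidates
```

The 'don't look, won't find' point follows directly: the second site only shows up because the scan deliberately searches for imperfect matches, not just the intended one.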

Other gene editing techniques such as ODM could also edit DNA in the wrong place. In addition, the newly edited gene could interact with other genes in different ways, affecting protein composition and production, chemistry and metabolism.

Many of the gene-editing techniques are so new that it is not yet possible to fully evaluate the potential for and consequences of unintended changes. Importantly, just because gene-edited organisms don't contain foreign DNA, this doesn't make them safe.

Furthermore, there is increasing evidence of 'off-target' effects. The intended change (e.g. tolerance to a herbicide or cattle without horns) may be clear to see, but the unintended changes aren't immediately apparent, and certainly not apparent if they aren't looked for. It's a case of 'don't look, won't find'.

The law is clear: gene-editing is still genetic engineering

The question being debated in the EU at the moment is whether small 'edits', i.e. changes, insertions or deletions, of segments of DNA without the insertion of new genes are also to be considered as producing a GMO, or fall outside the scope of European law.

At the core of this debate is the distinction between conventional breeding, which involves mating, and GMOs. In both the EU law (Directive 2001/18) (see Article 2(2) and Annexes, below) and the UN agreement on GMOs - the Cartagena Protocol, made under the Convention on Biological Diversity - GMOs involve novel arrangements of genetic material that do not occur naturally, and alterations to genetic material being made directly without mating.

The Directive contains annexes which define exactly what techniques of genetic alteration do, and do not, fall under the definition (reproduced in full below). However, gene-editing is simply not mentioned: the technology did not exist in 2001 when the law was written. That means we have to rely on the initial definition:

"'Genetically modified organism (GMO)' means an organism, with the exception of human beings, in which the genetic material has been altered in a way that does not occur naturally by mating and/or natural recombination"

Likewise, the Cartagena Protocol, adopted in 2000, does not specifically list gene-editing as a technology included in its definition (full version below). However the technology, once again, is encompassed by the simple meaning of the words:

"'Living modified organism' means any living organism that possesses a novel combination of genetic material obtained through the use of modern biotechnology ... 'Modern biotechnology' means the application of: a. In vitro nucleic acid techniques, including ..."

In terms of the Directive, it is accurate to say that in gene-edited organisms "the genetic material has been altered in a way that does not occur naturally by mating and/or natural recombination".

In terms of the Cartagena Protocol, it is accurate to say that a gene-edited organism "possesses a novel combination of genetic material obtained through the ... application of ... In vitro nucleic acid techniques".

So in fact - despite the abstruse legal arguments deployed by GM advocates - the law is perfectly clear on the issue. According to both the EU and Cartagena definitions, gene-editing produces GMOs.

Therefore, to remove or exempt gene-editing from regulation, as GM advocates wish, the EU would need to amend the existing Directive. If it tried to interpret the Directive as GM advocates wish, the decision would surely be challenged in the European Court, for example by one of the many EU countries opposed to the use of GMOs in farming - where, in our opinion, it should be struck down.

Does it matter if gene-editing is not classed as a GM technique?

If crops and animals developed by gene-editing techniques are officially considered non-GM, or exempted from the EU GMO laws, then they will enter the food chain and the environment completely unregulated and unlabelled.

This means there would be no assessment of food or environmental safety, no requirement to detect any unintended alterations to the organisms' DNA or their consequences, and no assessment of the implications of the trait produced by gene-editing (e.g. herbicide tolerance).

Gene-edited foodstuffs would not have to be labelled. European consumers have resoundingly said "No!" to GM crops, yet there would be no way for consumers and farmers to avoid gene-edited crops and animals if they were not classified (and hence labelled) as GMOs.

Importantly, although gene-editing might be promoted as causing only small changes in DNA, it can be used repeatedly to achieve substantial changes to one or even several genes. This raises the concern that the alterations could involve the introduction of, for example, whole new chemical pathways within a plant or animal with a high potential for unexpected effects.

Such organisms would end up in our environment and on our dinner plates completely unregulated if gene-editing techniques are not encompassed by the GMO regulations.

The EU's GMO laws were devised to protect against the risk of organisms developed by the direct alteration of genetic material using modern biotechnologies (e.g. by in vitro techniques) entering the environment and food chain.

It's clear that gene-edited crops and animals need to be assessed as GMOs in the same ways as current GM crops. Otherwise EU citizens will unwittingly be exposed to the risks of genetic engineering without testing or labelling, as will the environment, biodiversity and agriculture.

Dr Janet Cotter runs an environmental consultancy, Logos Environmental. She was previously Senior Scientist with Greenpeace International for 15 years.

Dr Ricarda Steinbrecher is a biologist, geneticist and co-director of EcoNexus. She has worked on GMOs since 1995, especially UN-led processes on Biosafety, the risk assessment of genetically engineered organisms and synthetic biology. She's a founding member of the European Network of Scientists for Social and Environmental Responsibility and works with civil society and small-scale farmer groups world-wide.

Additional reporting by The Ecologist.

Further reading

Cartagena Protocol - use of terms

(g) "Living modified organism" means any living organism that possesses a novel combination of genetic material obtained through the use of modern biotechnology

(h) "Living organism" means any biological entity capable of transferring or replicating genetic material, including sterile organisms, viruses and viroids

(i) "Modern biotechnology" means the application of:
a. In vitro nucleic acid techniques, including recombinant deoxyribonucleic acid (DNA) and direct injection of nucleic acid into cells or organelles, or
b. Fusion of cells beyond the taxonomic family, that overcome natural physiological reproductive or recombination barriers and that are not techniques used in traditional breeding and selection

Directive 2001/18, Article 2(2) & Annexes

"Genetically modified organism (GMO)" means an organism, with the exception of human beings, in which the genetic material has been altered in a way that does not occur naturally by mating and/or natural recombination

Within the terms of this definition:

(a) genetic modification occurs at least through the use of the techniques listed in Annex I A, part 1

(b) the techniques listed in Annex I A, part 2, are not considered to result in genetic modification

TECHNIQUES REFERRED TO IN ARTICLE 2(2)

Techniques of genetic modification referred to in Article 2(2)(a) are inter alia:

(1) recombinant nucleic acid techniques involving the formation of new combinations of genetic material by the insertion of nucleic acid molecules produced by whatever means outside an organism, into any virus, bacterial plasmid or other vector system and their incorporation into a host organism in which they do not naturally occur but in which they are capable of continued propagation

(2) techniques involving the direct introduction into an organism of heritable material prepared outside the organism including micro-injection, macro-injection and micro-encapsulation

(3) cell fusion (including protoplast fusion) or hybridisation techniques where live cells with new combinations of heritable genetic material are formed through the fusion of two or more cells by means of methods that do not occur naturally.

Techniques referred to in Article 2(2)(b) which are not considered to result in genetic modification, on condition that they do not involve the use of recombinant nucleic acid molecules or genetically modified organisms made by techniques/methods other than those excluded by Annex I B:

(2) natural processes such as: conjugation, transduction, transformation.

TECHNIQUES REFERRED TO IN ARTICLE 3

Techniques/methods of genetic modification yielding organisms to be excluded from the Directive, on the condition that they do not involve the use of recombinant nucleic acid molecules or genetically modified organisms other than those produced by one or more of the techniques/methods listed below are:

(2) cell fusion (including protoplast fusion) of plant cells of organisms which can exchange genetic material through traditional breeding methods.


Gene editing wiped out a population of mosquitoes in lab tests

The malaria-carrying mosquito Anopheles gambiae’s days may be numbered. Scientists have devised a gene drive that wiped out the mosquito’s populations in lab tests.

October 26, 2018 at 8:28 am

Gene editing may push a species of malaria-carrying mosquito to extinction.

These new results come from a small-scale laboratory study. Researchers used a genetic engineering tool to make changes to a mosquito species called Anopheles gambiae (Ah-NOF-eh-lees GAM-bee-aye). As a result, the mosquitoes stopped producing offspring within eight to 12 generations. The researchers reported this September 24 in Nature Biotechnology. If the finding holds up in larger studies, this tool could be the first capable of wiping out a disease-carrying mosquito species.

“This is a great day,” says James Bull. He’s an evolutionary biologist at the University of Texas at Austin. He was not involved in the study. “Here we are with a technology that could radically change public health for the whole world.” That’s because A. gambiae is the main mosquito spreading malaria in Africa. The disease kills more than 400,000 people each year worldwide, according to the World Health Organization. Many of those who die are children.

The researchers changed the mosquitoes’ genes with a gene drive. Gene drives use the molecular “scissors” known as CRISPR/Cas9 to copy and paste themselves into an organism’s DNA at precise locations. They’re designed to break the rules of inheritance. They can quickly spread a genetic tweak to all offspring.

The new gene drive breaks a mosquito gene called doublesex. Female mosquitoes that inherit two copies of the broken gene develop like males. They are unable to bite or lay eggs. Being unable to bite means they can’t spread the malaria parasite. Males and females that inherit only one copy of the disrupted gene develop normally and are fertile. Males don’t bite, whether they have the gene drive or not.

Changing genes

In each of two cages, researchers placed 300 female and 150 male normal A. gambiae mosquitoes. Then they added 150 males carrying the gene drive. In each generation, 95 percent to more than 99 percent of offspring inherited the gene drive. Normally, only 50 percent of offspring inherit a gene.

Within seven generations, all of the mosquitoes in one cage carried the gene drive. No eggs were produced in the next generation. The population died out. In the other cage, it took 11 generations for the gene drive to spread to all of the mosquitoes and crash the population. The insects in that cage made no offspring in generation 12.
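The cage numbers above can be turned into a toy simulation. This is a deliberately simplified sketch, not the researchers' model: it assumes discrete generations, random mating, a fixed population cap, and no resistance mutations, and the inheritance rate (`p_drive`) and offspring count are illustrative values chosen to match the article's reported range.

```python
import random

def transmit(genotype, p_drive, rng):
    # Homozygotes can only pass on their single allele type; a
    # heterozygote passes on the drive allele with probability
    # p_drive (super-Mendelian), instead of the usual 0.5.
    if genotype == ('d', 'd'):
        return 'd'
    if genotype == ('+', '+'):
        return '+'
    return 'd' if rng.random() < p_drive else '+'

def simulate(p_drive=0.97, offspring_per_female=4, cap=300,
             max_gen=30, seed=1):
    rng = random.Random(seed)
    # Cage setup loosely following the study: 300 wild-type females,
    # 150 wild-type males, plus 150 drive-carrying (d/+) males.
    females = [('+', '+')] * 300
    males = [('+', '+')] * 150 + [('d', '+')] * 150
    for gen in range(1, max_gen + 1):
        # Females with two broken copies of doublesex develop like
        # males: they cannot lay eggs.
        fertile = [f for f in females if f != ('d', 'd')]
        if not fertile or not males:
            return gen  # the caged population has crashed
        next_f, next_m = [], []
        for mother in fertile:
            father = rng.choice(males)
            for _ in range(offspring_per_female):
                child = (transmit(mother, p_drive, rng),
                         transmit(father, p_drive, rng))
                (next_f if rng.random() < 0.5 else next_m).append(child)
        # Cap the cage population so it stays roughly constant.
        rng.shuffle(next_f)
        rng.shuffle(next_m)
        females, males = next_f[:cap], next_m[:cap]
    return None  # no crash within max_gen generations

crash_gen = simulate()  # generation at which the caged population crashed
```

Running the same model with Mendelian inheritance (`p_drive=0.5`) shows why the drive matters: a sterility-causing allele at low frequency is simply selected away, while the super-Mendelian version spreads to fixation and then the population collapses.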

Other gene-drive studies have done computer simulations to predict how long it would take for the drives to spread through a population. This is the first time the approach has succeeded in actual mosquitoes.

Other types of gene drives also have been passed to offspring at high rates. But in those experiments, DNA changes, or mutations, that destroy the cutting site for CRISPR/Cas9 popped up. That allowed the mosquitoes that carry the mutation to resist the drive.

A few mosquitoes in the new study also developed mutations. However, “no resistance was observed,” says study coauthor Andrea Crisanti. He’s a medical geneticist in England at Imperial College London. Those mutations broke the doublesex gene. Females with these broken genes were sterile and couldn’t pass the mutations on to the next generation.

All insects have some version of doublesex. “We believe that this gene may represent [a vulnerability] for developing new pest-control measures,” Crisanti says.

A. gambiae likes to bite people. That makes it good at spreading malaria from person to person. The gene drive now raises the prospect of deliberately causing the extinction of this species.

“If you have a technology that could eradicate that [mosquito], it would be unethical not to use it,” says Omar Akbari. He is a geneticist at the University of California, San Diego. He was not involved in the work. But Akbari thinks it is unlikely that the gene drive would work as well in the wild as it did in the lab. That’s because resistance is bound to pop up at some point.

No one knows what will happen to the environment if all the mosquitoes die, either. There could be problems for species that eat mosquitoes, for instance. Also unknown is whether the gene drive could be passed on to other species. What if a "James Bond–style villain" used a similar gene drive to attack honeybees or other beneficial insects, asks Philipp Messer. He is a population geneticist at Cornell University in Ithaca, N.Y. "Humans will always come up with ways to abuse [technology]. And in this case, it's just so easy. That's what worries me."

Power Words

biology The study of living things. The scientists who study them are known as biologists.

Cas9 An enzyme that geneticists are now using to help edit genes. It can cut through DNA, allowing it to fix broken genes, splice in new ones or disable certain genes. Cas9 is shepherded to the place it is supposed to make cuts by CRISPRs, a type of genetic guide. The Cas9 enzyme came from bacteria. When viruses invade a bacterium, this enzyme can chop up the germ's DNA, making it harmless.

coauthor One of a group (two or more people) who together had prepared a written work, such as a book, report or research paper. Not all coauthors may have contributed equally.

CRISPR An abbreviation (pronounced crisper) for the term "clustered regularly interspaced short palindromic repeats." These are pieces of RNA, an information-carrying molecule. They are copied from the genetic material of viruses that infect bacteria. When a bacterium encounters a virus that it was previously exposed to, it produces an RNA copy of the CRISPR that contains that virus' genetic information. The RNA then guides an enzyme, called Cas9, to cut up the virus and make it harmless. Scientists are now building their own versions of CRISPR RNAs. These lab-made RNAs guide the enzyme to cut specific genes in other organisms. Scientists use them, like a genetic scissors, to edit (or alter) specific genes so that they can then study how the gene works, repair damage to broken genes, insert new genes or disable harmful ones.

develop To emerge or come into being, either naturally or through human intervention, such as by manufacturing. (in biology) To grow as an organism from conception through adulthood, often undergoing changes in chemistry, size and sometimes even shape.

disrupt (n. disruption) To break apart something, interrupt the normal operation of something, or throw the normal organization (or order) of something into disorder.

DNA (short for deoxyribonucleic acid) A long, double-stranded and spiral-shaped molecule inside most living cells that carries genetic instructions. It is built on a backbone of phosphorus, oxygen, and carbon atoms. In all living things, from plants and animals to microbes, these instructions tell cells which molecules to make.

ecology A branch of biology that deals with the relations of organisms to one another and to their physical surroundings. A scientist who works in this field is called an ecologist.

egg The unfertilized reproductive cell made by females.

engineering The field of research that uses math and science to solve practical problems.

eradicate To deliberately eliminate or wipe out, such as a population of vermin (rats or cockroaches, for instance) inhabiting a particular site.

evolutionary An adjective that refers to changes that occur within a species over time as it adapts to its environment. Such evolutionary changes usually reflect genetic variation and natural selection, which leave a new type of organism better suited for its environment than its ancestors. The newer type is not necessarily more "advanced," just better adapted to the conditions in which it developed.

evolutionary biologist Someone who studies the adaptive processes that have led to the diversity of life on Earth. These scientists can study many different subjects, including the microbiology and genetics of living organisms, how species change to adapt, and the fossil record (to assess how various ancient species are related to each other and to modern-day relatives).

extinction The permanent loss of a species, family or larger group of organisms.

fertile Old enough and able to reproduce.

gene (adj. genetic) A segment of DNA that codes, or holds instructions, for a cell's production of a protein. Offspring inherit genes from their parents. Genes influence how an organism looks and behaves.

gene drive A technique for introducing new bits of DNA into genes to change their function. Unlike other such genetic engineering techniques, gene drives are self-propagating. That means they make more of themselves, becoming part of every unaltered target gene they encounter. As a result, they get passed on to more than 50 percent of an altered animal's offspring, "driving" themselves quickly into populations.

gene editing The deliberate introduction of changes to genes by researchers.

generation A group of individuals (in any species) born at about the same time or that are regarded as a single group. Your parents belong to one generation of your family, for example, and your grandparents to another. Similarly, you and everyone within a few years of your age across the planet are referred to as belonging to a particular generation of humans. The term also is sometimes extended to year classes of other animals or to types of inanimate objects (such as electronics or automobiles).

genetic engineering The direct manipulation of an organism's genome. In this process, genes can be removed, disabled so that they no longer function, or added after being taken from other organisms. Genetic engineering can be used to create organisms that produce medicines, or crops that grow better under challenging conditions such as dry weather, hot temperatures or salty soils.

insect A type of arthropod that as an adult will have six segmented legs and three body parts: a head, thorax and abdomen. There are hundreds of thousands of insects, which include bees, beetles, flies and moths.

malaria A disease caused by a parasite that invades the red blood cells. The parasite is transmitted by mosquitoes, largely in tropical and subtropical regions.

mutation (v. mutate) Some change that occurs to a gene in an organism's DNA. Some mutations occur naturally. Others can be triggered by outside factors, such as pollution, radiation, medicines or something in the diet. A gene with this change is referred to as a mutant.

organism Any living thing, from elephants and plants to bacteria and other types of single-celled life.

parasite An organism that gets benefits from another species, called a host, but doesn't provide that host any benefits. Classic examples of parasites include ticks, fleas and tapeworms.

population (in biology) A group of individuals from the same species that lives in the same area.

resistance (as in drug resistance) The reduction in the effectiveness of a drug to cure a disease, usually a microbial infection. (as in disease resistance) The ability of an organism to fight off disease.

simulation (v. simulate) An analysis, often made using a computer, of some conditions, functions or appearance of a physical system. A computer program would do this by using mathematical operations that can describe the system and how it might change over time or in response to different anticipated situations.

species A group of similar organisms capable of producing offspring that can survive and reproduce.

sterile An adjective that means devoid of life, or at least of germs. (in biology) An organism that is physically unable to reproduce.

technology The application of scientific knowledge for practical purposes, especially in industry, or the devices, processes and systems that result from those efforts.

World Health Organization An agency of the United Nations, established in 1948, to promote health and to control communicable diseases. It is based in Geneva, Switzerland. The United Nations relies on the WHO for providing international leadership on global health matters. This organization also helps shape the research agenda for health issues and sets standards for pollutants and other things that could pose a risk to health. WHO also regularly reviews data to set policies for maintaining health and a healthy environment.

About Tina Hesman Saey

Tina Hesman Saey is the senior staff writer and reports on molecular biology. She has a Ph.D. in molecular genetics from Washington University in St. Louis and a master’s degree in science journalism from Boston University.


The Pros And Cons Of Genetically Engineering Humans

Today there is a lot of fear and anxiety around the prospect of genetically modifying human beings. Yet it looks increasingly likely that this will become commonplace in the coming decades.

Over the past year, I’ve had the opportunity to speak with members of the scientific and business communities working on genetic engineering. I’ve discussed with them how this technology could evolve and what some of the potential benefits and risks are to society. In this post, I’ll share what I’ve learned, in case others find it interesting.

Why should we care about genetic engineering?

It could help eliminate hundreds of diseases. It could eliminate many forms of pain and anxiety. It could increase intelligence and longevity. It could change the scale of human happiness and productivity by many orders of magnitude. There are only a handful of areas of research in the world with this much potential.

Zooming out, genetic engineering could be viewed as a historical event on par with the Cambrian explosion in how it changed the pace of evolution. When most people think of evolution they're thinking about biological evolution through natural selection, but this is just one form. Over time, it will likely be superseded by other forms of evolution that act much more quickly. What are some of these? The candidates in my mind are (1) artificial intelligence, or synthetic life, breeding and mutating at a rapid rate (2) biological life, with genetic engineering being used to take a more directive approach, and (3) some merged hybrid of the two. Instead of waiting hundreds of thousands of years for beneficial mutations to show up (as with natural selection), we could start to see beneficial changes every year.

This all sounds pretty far-fetched. I don't think any of it will happen in the near future.

It’s important to disentangle whether we think something will happen from whether we think it should happen. Many people are uncomfortable with the idea of it happening, and this influences their prediction of how likely it is to happen.

Consider where we are today:

  • Humans have been genetically engineering organisms for thousands of years using selective breeding (as opposed to natural selection).
  • Starting in the 1970s, humans began directly modifying the DNA of plants and animals, creating GMO foods, etc.
  • Today, half a million babies are born each year using in vitro fertilization (IVF). Increasingly, this includes sequencing the embryos to screen them for diseases, and bringing the most viable embryo to term (a form of genetic engineering, without actually making edits).
  • In 2018, He Jiankui created the first genetically modified babies in China.
  • In 2019, a number of FDA-approved clinical trials for gene therapies began.

So genetic engineering is already happening on humans today, and I don’t see any reason why it would stop.

With the creation of CRISPR and similar techniques, we’ve seen an explosion in research around making actual edits to DNA. I recommend reading Jennifer Doudna and Samuel Sternberg’s book, A Crack In Creation, for a great overview of this topic.

A lot of research is happening, but actually editing human DNA won't be allowed. You don't actually think people should be having designer babies, do you?

If it has the potential to eradicate many diseases and minimize human suffering, I think we should continue to research it, with the caution and prudence it deserves.

Some will say that every child has the right to remain genetically unmodified, and others will say that every child has the right to be born free of preventable diseases. We make many decisions on behalf of children to try to help them have a better life, and I don't see why this would be any exception.

Many new medical treatments have similar ethical issues as they are being developed. Typically, new drugs are tested on mice, then terminally ill patients, then slowly wider sets of people. They go through FDA trials for safety and efficacy. There is a well established path to test new therapies. Genetic engineering may have more potential (both for good and for harm) than most new medical treatments, but this doesn’t mean that a similar process can’t be followed.

The American National Academy of Sciences and National Academy of Medicine also gave qualified support to human genome editing in 2017 “once answers have been found to safety and efficiency problems…but only for serious conditions under stringent oversight.”

As for “designer babies”, people use this term to mean choosing traits like height or eye color that are not related to health. I do think some parents will want to choose attributes like these, but this isn’t where most of the potential benefits will come from. I’ll discuss this more a bit lower down.

Finally, it won’t just be babies. Adults will be genetically modified at some point as well.

I don’t know. It just seems wrong to “play god” and move into this territory.

Think about surgery. Three hundred years ago, it must have seemed quite strange to “play god” and cut open a human body. Surgery was also an incredibly risky and crude process (someone’s arm or leg might be amputated on a battlefield in an attempt to save their life, for instance). Over time, surgery became much safer, and we started to use it in less life threatening situations. Today, people undergo purely elective or cosmetic surgery.

The same thing will likely be true with genetic engineering. It may start off being used only in dire situations where people have no other options, but eventually it could become safe enough that people genetically modify themselves for purely cosmetic reasons (for example, to change their hair color). In my view, there is nothing inherently wrong with people wanting to change, improve, or heal their own bodies, even if some uses are more urgent than others. And everyone should make this choice for themselves (I wouldn't presume to make the choice for them).

We won’t know the long term effects on people for many decades. I certainly wouldn’t want to be one of the first to get it done!

There is a misconception that the first edits made in humans will be totally unpredictable. There are some genes, carried by one in ten people on Earth, that make their carriers healthier in some way. It will be safer than many people think to introduce such a gene into someone who doesn't have it, since it can be widely studied in the existing population. Most new drugs are introduced into the market after just hundreds or thousands of people have taken them during trial periods, and this is a sufficient bar to demonstrate safety. So a gene that a billion people in the world already have could potentially be far safer than any new drug that has ever come to market.

In addition, new therapies are often tested on terminally ill people who have no other options, so healthy people likely wouldn’t be the initial market.

This doesn’t mean that there can’t be other risks in the procedure, but the idea that an edit to a human genome would have entirely unpredictable results is false.

Many conditions are not controlled by one or two genes. So it won’t be as simple as you say to eradicate disease.

This is true. Diseases exist on a spectrum from having a single gene culprit to having many thousands of risk variants which increase or decrease susceptibility to environmental factors. A growing body of research is advancing from uncovering these monogenic (single gene) causes of diseases to uncovering the causes of more complex (polygenic) diseases. Results are improving quickly as a consequence of larger datasets, cheaper sequencing, and use of machine learning.

Even in a world where only simple gene edits were possible, a lot of human suffering could be eliminated. For instance, Verve is developing gene therapies to make heart disease, one of the leading causes of death in the world, less prevalent with relatively small edits. But other conditions, like depression or diabetes, don’t seem to be caused by a single gene, or even a handful of genes.

Luckily, machine learning (including techniques like deep learning) is well suited to solving complex, multivariate problems like polygenic risk scoring, and it is improving at an incredible rate right now. Companies like GenomicPrediction have started offering polygenic risk scores to expecting parents. In addition, the datasets of sequenced genomes keep getting larger (some contain over a million sequenced genomes at this point), which will improve the accuracy of the models over time.
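To make polygenic risk scoring concrete: in its simplest form, a polygenic risk score is just a weighted sum. For each variant, you count how many risk alleles a person carries (0, 1, or 2) and multiply by that variant's estimated effect size. The effect sizes and genotypes below are made-up illustrative numbers; real scores use thousands to millions of variants, with weights estimated from large association studies.

```python
# Hypothetical per-variant effect sizes (e.g. log odds ratios), as
# would be estimated from a genome-wide association study.
effect_sizes = [0.12, -0.05, 0.30, 0.08]

def polygenic_risk_score(allele_counts, weights):
    """allele_counts[i] is how many risk alleles (0, 1, or 2) a
    person carries at variant i; the score is the weighted sum."""
    return sum(c * w for c, w in zip(allele_counts, weights))

person_a = [2, 0, 1, 1]  # carries the two largest-effect risk alleles
person_b = [0, 2, 0, 1]

score_a = polygenic_risk_score(person_a, effect_sizes)  # 0.62
score_b = polygenic_risk_score(person_b, effect_sizes)  # -0.02
```

The machine-learning methods mentioned above go beyond this additive model, but the additive weighted sum is the baseline they improve on, and it shows why bigger datasets help: more data means more accurate weight estimates.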

Many things aren’t controlled by genetics. You can’t make happy/healthy humans just with genetic engineering.

Also true. There are many environmental and lifestyle factors to consider, in addition to genetics. The lifestyle/nurture components are hard challenges in their own right, but thankfully we have some amount of control over them. For instance, we can eat healthier food, go for walks, or exercise. But in contrast, we have very little control of our genetics today.

Most people take it as a given that they can never change their genes, which is actually quite sad if you think about it. It feels terrible to be stuck in any situation where you’re powerless to change it. Imagine the person who continually struggles with their weight, no matter how much they focus on exercise and diet, comparing themselves to people who seem to eat whatever they want without gaining a pound. Nature can be very cruel to us, and genes can create an uneven playing field in life. Genetic engineering may not be the whole solution, but it would certainly unlock a big piece of it.

It’s a slippery slope from disease prevention to enhancement, where do we draw the line?

The likely answer is that there isn't a clear line, and we won't draw one. The Overton window will continue to shift as people become more comfortable with genetic engineering.

Genetic engineering will start by being focused on disease prevention, because this is the most socially acceptable form of it at the moment. But, for instance, if you have a gene that creates low bone density (making you predisposed to osteoporosis), and you correct this with genetic engineering, are your stronger bones preventing disease or are they an enhancement (enabling you to play sports and lift heavy things)? The answer is both. There are many blurry lines like this. To me, the goal is just to improve the human condition, so the distinction between preventing bad outcomes and creating good outcomes is less relevant.

In addition, it is worth noting that we do things all the time today to "enhance" the human body (wearing running shoes, putting on sunblock, wearing corrective lenses, etc.). And we even do things to enhance ourselves genetically today, like choosing whom to have children with, or screening embryos during IVF. Genetic enhancement may be scary to some people today, but I think this is mainly just because it is new. Over time, it could be considered as normal as getting LASIK surgery to fix your eyesight.

If everyone wants to have a certain trait, won’t this create less diversity in the world?

There are some genes, like those which increase your risk of heart disease, which most people will want to eliminate. So in that sense there might be less genetic diversity. But I don't think this will be an overwhelming trend, for two reasons. The first is that there is great variety in human preferences (in what is considered beautiful, for instance), and the second is that many people have a desire to stand out and be unique. If it becomes cheap and ubiquitous to match some definition of beautiful, then it will no longer hold the same cachet, and preferences will evolve, just like in fashion. When you can be whoever you want, I think we'll actually see much greater diversity, not less.

You can see a glimpse of what this might look like in video games today, where people can create their own avatars. When people can be whatever character they want, the range of expression is much greater than in real life.

Genetic engineering could also help same-sex couples have genetically related children, which would be a new development. And it could even lead to children which are the product of more than two people. Imagine a child that is the product of ten, or even a hundred, people.

Finally, we may see people change themselves in ways that can’t occur naturally today (webbed fingers? scales? night vision like a cat?). If we are truly able to master genetic engineering over the coming century, there will be many beautiful new forms of individual expression that we can’t even imagine today. The very idea of what it means to be human will change.

Many great entrepreneurs and artists had ADHD, autism, depression, schizophrenia and other conditions which people may want to eliminate with genetic engineering. In this world, wouldn't these qualities be eliminated in the name of conformity and risk aversion?

I don’t think so. Parents aspire for their children to be all sorts of things in life: artists, scientists, politicians, generals, religious leaders, entrepreneurs, etc. These each might have some genetic traits in common, and others that are very different. If it turned out that the best chance of becoming a successful artist was to start with a certain set of genes that included ADHD, I suspect many parents would still opt for this.

We will probably find ourselves in a world with far more brilliant outliers, if parents can get a genetic head start on raising the next Picasso or Einstein. Other parents will opt for balance. There is no right or wrong answer, just preferences.

Finally, just because we see examples like the above today doesn't mean this needs to be the case in the future. Brilliant people are often "spiky" (outliers in a few areas with severe deficiencies in others), but in a world where genetic engineering is mastered there may be people with all the upside (and more) and little or none of the downside, so there is no guarantee the two need to be linked.

Will this lead to modern day eugenics?

I don't think so. Historical eugenics was defined by governments and political groups trying to modify the gene pool through force. By contrast, modern gene therapies will increase choices for individuals, who can make their own decisions. When people can choose how they want to modify and heal themselves (and their children), I think it will, in general, be liberating.

There are people in society who might try to abuse this technology (just like any technology), but as long as it is broadly available I think this mitigates a lot of the risk. It’s unlikely that one country or political group would have exclusive access to genetic engineering for long (it is widely researched globally, with a lot of information exchange between groups, both formally and informally).

Some day, genetic engineering may even make it possible to create people who are more tolerant and accepting of others around them. Tribalism is a part of our evolution, and it may have a genetic component. Even children exhibit this quality from a young age. How interesting would it be if people were able to change on this dimension genetically? We don’t know how to do this yet, but it could be possible in the future.

Won’t this create a world of haves and have nots? What if it is only available to rich people? What if it turns out like Gattaca?

Just like many technologies, genetic engineering will almost certainly be available in developed countries first, and it will be expensive. But this is not unique. Cell phones, airplanes, and even basic sanitation are all unevenly distributed around the world. The beauty of technology is that it tends to drive down costs over time, so it eventually reaches a wider group of people. The cell phone was once a tool only for rich people on Wall St, and it is now available to even the poorest people in the world. There is an open question about whether genetic engineering will follow a cost curve that is more like technology (lower over time following Moore’s law) or like healthcare (rising over time following Eroom’s law), but this has more to do with policy decisions than the technology itself. The main point is that high initial costs are not a good reason to prevent innovation from happening. If we took this approach, we likely wouldn’t have any of the improvements we see in the world today.

It’s also true that genetic engineering will offer advantages to those who can access it. This could create a less even playing field in some ways, but in other ways, it could actually make it more fair. Today, some people win the genetic lottery at birth while others lose (for instance, being prone to depression, a learning disability, etc). If any child could start on a level playing field genetically, this feels like a more fair world.

Finally, genetic modification can also take place in adult humans. So even if someone doesn’t have access to it at birth, they may still be able to benefit from genetic engineering later in life.

Gattaca misses this last point, implying that you will always be left behind if you weren't born into an elite group. Reality will probably afford more social mobility, with adults benefiting from new genetic engineering treatments as well. It is a very entertaining film nonetheless, and I suggest that anyone who is interested in the subject watch it.

What if people try to enhance traits like intelligence?

Many intelligent people exist in the world today, and the ethical ones, at least, don't seem to pose much of a problem. So let's say we doubled the number of smart people in the world (using IQ or whatever definition of smart you prefer) through genetic engineering, while keeping the percentage of ethical ones the same or greater. Or similarly, we could double the smartness of the existing people. Would this be a problem?

Certainly some good things would happen. The pace of improvement in society would likely increase, for instance, with many more smart, capable, people solving the world’s challenges.

The biggest negative change might be that the rest of us feel a little left behind or bewildered by all the new progress and areas of research, if we didn’t similarly have our intelligence increased. This boils down to a question of whether you think we should value overall growth in society, or one’s relative place in it, more highly. Each person should answer this for themselves (I don’t think there is one right answer).

So it could be a mixed outcome, or very good, depending on your perspective. (Side note: this is a great short story about what it might feel like as society begins to advance.)

One final thought experiment: if people want to become smarter, do we have the right to stop them? If it is by getting an education, most people would say no. If it is through genetic engineering, how is this different?

Should parents be able to choose the genes of their child?

In general, I think yes, because parents choose all sorts of things that have a major impact on their children (what they eat, how they are educated, whether they are born at all, etc) as their guardian. This is a well established concept in the law today, with guardians making major decisions for a child until they turn 18 (or an equivalent age in each country). Once children come of age, they will likely take control of their genetic modification, just as they can make a decision to get a tattoo.

It would be a shame if the genes that parents chose for their children were fixed indefinitely into the future. As I’ve discussed elsewhere, it’s likely in the future that genes can be modified in living people, not just embryos. So hopefully children aren’t stuck with their parents’ genetic preferences for life.

Imagine that you’re an expecting parent. How much would you pay to have the peace of mind that your child will arrive healthy? Imagine you were an adult with a life threatening disease. How much would you pay to receive a cure that required a genetic edit? The answer to these questions says a lot about how genetic engineering is likely to be adopted in the future.

Today, it is widely considered to be unconscionable to genetically modify humans. But I believe that within twenty years, we will see this view change dramatically, to a point where it will be considered unconscionable not to genetically modify people in many cases.

Genetic engineering is one of the highest-potential areas of research today. I believe we should continue to invest in it, and entrepreneurs should work hard to bring new products to market in this space. Yes, it has risks, and we must proceed with caution. But many new technologies have risks — even life-threatening ones — and we eventually are able to use them to greatly benefit the world. We shouldn’t let fear hold back progress on promising new areas of research.

If you have any comments or questions about this post, or just want to stay updated on this space, please send me a note here. Thank you!


World’s First Genetically Modified CRISPR Babies Born in China

Here it is: the moment historians will look back upon as the dawn of Homo sapiens superior and the moment when us natural-borns get knocked down a peg on the social hierarchy. For decades, science fiction writers have foretold a future in which genetically-superior humans made possible by gene modification techniques will rise above us lowly normies with their enhanced intelligence and physiology, greater resistance to diseases, and, of course, stunning good looks. The prospect of editing the human genome has remained taboo, though, for longstanding ethical and moral reasons. Naturally, human-rights-optional China has ignored these, blazed ahead, and given the world its first two genetically-modified superbabies whether we want them or not. It begins.

Evolution is just too slow.

This isn’t the first time Chinese scientists have tested CRISPR on humans. As early as 2015, Chinese researchers were already altering the genomes of human embryos in laboratories – embryos which were never gestated. Now, geneticists at the Southern University of Science and Technology in Shenzhen have taken these techniques one step further by modifying the genomes of two embryos which were implanted into a human womb via in vitro fertilization. Those embryos are now two happy and healthy baby girls, Lulu and Nana. Scientists led by He Jiankui altered the girls’ genomes so that they will be immune to HIV – in theory. In statements made this week, He assures that the only changes made to the girls’ genomes were to the “doorway” which would allow HIV to potentially infect the girls. Who knows what unforeseen consequences might arise from the editing process, though?

The research has not yet been submitted for peer review and publication, so many scientists remain skeptical of the Chinese team’s claims. Jennifer Doudna, a biochemist at the University of California, Berkeley who helped develop CRISPR-Cas9 gene editing, warns that this trial is a “break from the cautious and transparent approach of the global scientific community’s application of CRISPR-Cas9 for human germline editing” adding that she and other scientists around the world are still “struggling to figure out what was done and also whether the process was done properly. We just don’t know yet.”

What will the future bring us now that we have the potential to alter the human genome as we see fit?

Many nations experimented with eugenics and other controlled breeding programs throughout the 20th century, but the advances made by CRISPR and other recent technologies let scientists remove all uncertainty from the equation (in theory) and edit the human genome on a gene-by-gene basis, opening the doors for all sorts of modifications with unknown long-term consequences.

While removing the chance for these girls to contract HIV can’t possibly be seen as a bad thing, this trial is the first to go over the apex and start sliding down the slipperiest of slopes. What’s next? Removing all cancer genes? Sure. Eradicating mental illnesses through removing their gene markers? Go right ahead. Creating an army of genetic Übermensch (more like 超人) capable of crushing genetically inferior opposing forces?


GM 2.0? ‘Gene-editing’ produces GMOs that must be regulated as GMOs

Australian farmer Geoffrey Carracher, who is against GM farming, with some canola seed that has been cross contaminated with GM seed from a nearby farm. Photo: Craig Sillitoe via Flickr (CC BY-NC-SA).

This is an important article about the battle to ensure that gene-editing is treated under GM regulations in the EU. It’s relevant to synbio as many of the new “breeding techniques” under consideration by the EU – such as the genome editing described in the article below – are increasingly seen as common practices within the synthetic biology field, and contribute towards the establishment of a broader bioeconomy.

by Janet Cotter & Ricarda Steinbrecher (Ecologist)

The EU is considering the exclusion of gene-edited plants and animals from GM regulations, write Janet Cotter & Ricarda Steinbrecher. However gene-edited organisms clearly fall within the definition of GMOs in both European and international law. They also present real risks to the environment and human health – and must be regulated like any other GMOs.

There has been a lot in the news recently about the ethics of gene editing in humans.

But, as yet largely unnoticed is that the European Commission is considering whether the gene-editing of plants and animals, for example in agriculture, falls outside the scope of EU regulations governing genetically modified organisms (GMOs).

In other words, whether the products of gene-editing should be labelled and regulated as GMOs, or allowed to enter the food chain untested and unlabelled.

If you believe the proponents’ claims, gene-editing is nothing more than the ‘tweaking’ of DNA in plants and animals – nothing to be concerned about.

But the reality is that gene editing is simply GM 2.0, with many of the same concerns and problems as the GM crops that Europeans have already rejected.

What is gene-editing?

Gene-editing is a form of genetic engineering. It covers a range of new laboratory techniques that, just as older genetic engineering techniques, can change the genetic material (usually DNA) of a living organism, for example a plant or an animal, without breeding.

In many respects, they are similar to the ‘traditional’ genetic engineering we are familiar with. The difference is that these techniques can change the DNA of the plant or animal at a specific ‘targeted’ location, compared to the insertion of genes at random locations characteristic of previous techniques.

Many of these techniques can be used to insert genes from an unrelated species into a plant or animal, as traditional genetic engineering does, and the resulting products, with their novel genes, would be regarded as GMOs. But not all the applications of gene-editing involve the insertion of novel genes.

The current debate surrounds applications of gene-editing that, instead of inserting genes, re-write genes using a sort of ‘DNA typewriter’. The question is whether plants and animals with ‘edited’ genes (without inserted novel genes) should be regulated as GMOs.

Products of gene editing with re-written genes that might be imported, grown or farmed in Europe (including the UK) in the near future include a herbicide-tolerant oilseed rape, produced by a technique known as oligonucleotide directed mutagenesis (ODM), and hornless cattle, developed through a technique known as ‘CRISPR’.

CRISPR is becoming well known in scientific circles as it’s a particularly efficient method of gene-editing.

The risks of gene-editing?

With current commercial GM crops, one of the major concerns is that unexpected effects can result, and have resulted, from the genetic engineering process, and these can affect food and environmental safety. These effects can include altered levels of toxins or nutritional compounds and changes to the protein chemistry, which could produce new allergens.

That is why the EU has set up regulations for GM organisms, requiring them to undergo an environmental and health risk assessment before they are grown or reared commercially or enter the food chain. Even so, doubts linger as to the effectiveness of these assessments.

‘Traditional’ genetic engineering involves the random insertion of genes (or genetic sequences) into an organism’s genome. Proponents tell us that gene-editing is far more precise than the genetic engineering techniques we are familiar with. But what exactly is meant by ‘precise’ here?

Gene-editing techniques may perhaps be more precise at the level and point where the DNA is altered but how this altered DNA might affect interactions with other genes and processes within the cell is largely unknown. Importantly, these gene-to-gene interactions within the cell are reflected in the organism as a whole.

The effects of the altered DNA on the wholesomeness as a foodstuff and how the organism interacts with the environment are far from being precisely known. Therefore, although gene-editing may be more precise in the intended location where the DNA is modified, there is still potential for unexpected and unpredictable effects.

Such effects could have implications for food, feed and environmental safety if they increase levels of toxic compounds, reduce levels of nutritional compounds or even produce new allergens.

‘Off-target’ genetic alterations

Just like traditional genetic engineering, gene-editing techniques can cause unintended alterations in the DNA. For example, several gene-editing techniques use so-called ‘molecular scissors’ to cut DNA as part of the editing process.

These molecular scissors sometimes have what is known as ‘off-target’ effects. This means the DNA is cut in unintended places as well as the intended places, accidentally causing additional genetic alterations.

Other gene editing techniques such as ODM could also edit DNA in the wrong place. In addition, the newly edited gene could interact with other genes in different ways, affecting protein composition and production, chemistry and metabolism.

Many of the gene-editing techniques are so new that it is not yet possible to fully evaluate the potential for and consequences of unintended changes. Importantly, just because gene-edited organisms don’t contain foreign DNA, this doesn’t make them safe.

Furthermore, there is increasing evidence of ‘off-target’ effects. The intended change (e.g. tolerance to a herbicide or cattle without horns) may be clear to see, but the unintended changes aren’t immediately apparent, and certainly not apparent if they aren’t looked for. It’s a case of ‘don’t look, won’t find’.

The law is clear: gene-editing is still genetic engineering

The question being debated in the EU at the moment is whether small ‘edits’, i.e. changes, insertions or deletions, of segments of DNA without the insertion of new genes are also to be considered as producing a GMO, or fall outside the scope of European law.

At the core of this debate is the question of what distinguishes GMOs from conventional breeding, which involves mating. In both EU law (Directive 2001/18) (See Article 2(2) and Annexes, below) and the UN agreement on GMOs – the Cartagena Protocol, made under the Convention on Biological Diversity – GMOs involve novel arrangements of genetic material that do not occur naturally, with alterations to the genetic material made directly, without mating.

The Directive contains annexes which define exactly what techniques of genetic alteration do, and do not, fall under the definition (reproduced in full below). However gene-editing simply is not mentioned: the technology did not exist in 2001 when the law was written. That means we have to rely on the initial definition:

“‘Genetically modified organism (GMO)’ means an organism, with the exception of human beings, in which the genetic material has been altered in a way that does not occur naturally by mating and/or natural recombination”

Likewise, the Cartagena Protocol, adopted in 2000, does not specifically list gene-editing as a technology included in its definition (full version below). However the technology, once again, is encompassed by the simple meaning of the words:

“‘Living modified organism’ means any living organism that possesses a novel combination of genetic material obtained through the use of modern biotechnology … ‘Modern biotechnology’ means the application of: a. In vitro nucleic acid techniques, including … “

In terms of the Directive, it is accurate to say that in gene-edited organisms “the genetic material has been altered in a way that does not occur naturally by mating and/or natural recombination”.

In terms of the Cartagena Protocol, it is accurate to say that a gene-edited organism “possesses a novel combination of genetic material obtained through the … application of … In vitro nucleic acid techniques”.

So in fact – despite the abstruse legal arguments deployed by GM advocates – the law is perfectly clear on the issue. According to both the EU and Cartagena definitions, gene-editing produces GMOs.

Therefore to remove or exempt gene-editing from regulation, as GM advocates wish, the EU would need to amend the existing Directive. If it tried to interpret the Directive as GM advocates wish, the decision would surely be challenged in the European Court, for example by one of the many EU countries opposed to the use of GMOs in farming – where in our opinion it should be struck down.

Does it matter if gene-editing is not classed as a GM technique?

If crops and animals developed by gene-editing techniques are officially considered non-GM, or exempted from the EU GMO laws, then they will enter the food chain and the environment completely unregulated and unlabelled.

This means there would be no assessment of food or environmental safety, no requirement to detect any unintended alterations to the organisms’ DNA or their consequences, and no assessment of the implications of the trait produced by gene editing (e.g. herbicide tolerance).

Gene-edited foodstuffs would not have to be labelled. European consumers have resoundingly said “No!” to GM crops, yet there would be no way for consumers and farmers to avoid gene-edited crops and animals if they were not classified (and hence labelled) as GMOs.

Importantly, although gene-editing might be promoted as causing only small changes in DNA, it can be used repeatedly to achieve substantial changes to one or even several genes. This raises the concern that the alterations could involve the introduction of, for example, whole new chemical pathways within a plant or animal with a high potential for unexpected effects.

Such organisms would end up in our environment and on our dinner plates completely unregulated if gene-editing techniques are not encompassed by the GMO regulations.

The EU’s GMO laws were devised to protect against the risk of organisms developed by the direct alteration of genetic material using modern biotechnologies (e.g. by in vitro techniques) entering the environment and food chain.

It’s clear that gene-edited crops and animals need to be assessed as GMOs in the same ways as current GM crops. Otherwise EU citizens will unwittingly be exposed to the risks of genetic engineering without testing or labelling, as will the environment, biodiversity and agriculture.

Dr Janet Cotter runs an environmental consultancy, Logos Environmental. She was previously Senior Scientist with Greenpeace International for 15 years.

Dr Ricarda Steinbrecher is a biologist, geneticist and co-director of EcoNexus. She has worked on GMOs since 1995, especially UN-led processes on Biosafety, the risk assessment of genetically engineered organisms and synthetic biology. She’s a founding member of the European Network of Scientists for Social and Environmental Responsibility and works with civil society and small-scale farmer groups world-wide.

Additional reporting by The Ecologist.

Further reading

Cartagena Protocol – use of terms

(g) “Living modified organism” means any living organism that possesses a novel combination of genetic material obtained through the use of modern biotechnology

(h) “Living organism” means any biological entity capable of transferring or replicating genetic material, including sterile organisms, viruses and viroids

(i) “Modern biotechnology” means the application of:
a. In vitro nucleic acid techniques, including recombinant deoxyribonucleic acid (DNA) and direct injection of nucleic acid into cells or organelles, or
b. Fusion of cells beyond the taxonomic family, that overcome natural physiological reproductive or recombination barriers and that are not techniques used in traditional breeding and selection

Directive 2001/18, Article 2(2) & Annexes

“Genetically modified organism (GMO)” means an organism, with the exception of human beings, in which the genetic material has been altered in a way that does not occur naturally by mating and/or natural recombination

Within the terms of this definition:

(a) genetic modification occurs at least through the use of the techniques listed in Annex I A, part 1

(b) the techniques listed in Annex I A, part 2, are not considered to result in genetic modification

TECHNIQUES REFERRED TO IN ARTICLE 2(2)

Techniques of genetic modification referred to in Article 2(2)(a) are inter alia:

(1) recombinant nucleic acid techniques involving the formation of new combinations of genetic material by the insertion of nucleic acid molecules produced by whatever means outside an organism, into any virus, bacterial plasmid or other vector system and their incorporation into a host organism in which they do not naturally occur but in which they are capable of continued propagation

(2) techniques involving the direct introduction into an organism of heritable material prepared outside the organism including micro-injection, macro-injection and micro-encapsulation

(3) cell fusion (including protoplast fusion) or hybridisation techniques where live cells with new combinations of heritable genetic material are formed through the fusion of two or more cells by means of methods that do not occur naturally.

Techniques referred to in Article 2(2)(b) which are not considered to result in genetic modification, on condition that they do not involve the use of recombinant nucleic acid molecules or genetically modified organisms made by techniques/methods other than those excluded by Annex I B:

(2) natural processes such as: conjugation, transduction, transformation,

TECHNIQUES REFERRED TO IN ARTICLE 3

Techniques/methods of genetic modification yielding organisms to be excluded from the Directive, on the condition that they do not involve the use of recombinant nucleic acid molecules or genetically modified organisms other than those produced by one or more of the techniques/methods listed below are:

(2) cell fusion (including protoplast fusion) of plant cells of organisms which can exchange genetic material through traditional breeding methods.


DETECTION OF GENOME ALTERATIONS VIA -OMICS TECHNOLOGIES

In the last 15 years, various advanced technologies have been developed that permit accumulation and assessment of large-scale datasets of biological molecules, including DNA sequence (the genome), transcripts (the transcriptome, involving RNA), DNA modification (the epigenome), and, to lesser extents, proteins and their modifications (the proteome) and metabolites (the metabolome). Such datasets enable comparative analyses of non-GE and GE lines in such a way that effects on plant gene expression, metabolism, and composition can be assessed in a more informed manner. Access to the technologies also permits analysis of the extent of the natural variation in a crop species at the DNA, RNA, protein, metabolite, and epigenetic levels, enabling determination of whether variation in GE crops is within the range found naturally and among cultivars. As discussed below for each of the -omics data types, technologies to access the molecules were relatively recent as of 2015 but were advancing rapidly. Some technologies were ready to be deployed to generate datasets for assessment of the effects of genetic-engineering events when the committee’s report was being written. Others will improve in precision and throughput in the coming decade and may someday be useful technologies for assessing effects of genetic-engineering events. The Precision Medicine Initiative announced by President Obama in January 2015 focuses on understanding how genetic differences between individuals and mutations present in cancer and diseased cells (versus healthy cells) affect human health. An analogous project that uses diverse -omics approaches in crop plants with genetic engineering and conventional breeding could provide in-depth improvements in the understanding of plant biological processes that in turn could be applied to assessing the effects of genetic modifications in crop plants.

Genomics

One way to ascertain whether genetic engineering has resulted in off-target effects (whether through nuclear transformation with Agrobacterium or gene guns, RNAi, or such emerging technologies as genome editing) is to compare the genome of the GE plant with an example, or reference, genome of the parent non-GE plant. The reference genome is like a blueprint for the species, revealing allelic diversity and identifying the genes associated with phenotype. Knowing the variation that occurs naturally in a species, one can compare the engineered genome with the reference genome to reveal whether genetic engineering has caused any changes, expected or unintended, and to gain context for assessing whether changes might have adverse effects. Because there is inherent DNA-sequence variation among plants within a species, and even between cultivars, any genetically engineered changes would need to be compared to the non-GE parent and the range of natural genomic variation. That is, changes made by genetic engineering must be placed in an appropriate context.
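The comparison just described can be made concrete with a minimal Python sketch. The sequences, the accession panel, and the helper names below are invented for illustration only; they are not part of the committee’s report or any real pipeline:

```python
# Hypothetical illustration: compare an engineered sequence to a reference
# and check each difference against natural variation in other accessions.
# All sequences are invented toy data, not real genomes.

def diff_positions(reference, query):
    """Return positions where two equal-length sequences differ."""
    return [i for i, (r, q) in enumerate(zip(reference, query)) if r != q]

def classify_changes(reference, engineered, natural_accessions):
    """Label each engineered change as within or outside natural variation."""
    report = []
    for pos in diff_positions(reference, engineered):
        variant = engineered[pos]
        # A change is "natural" if any accession carries the same base there.
        seen_naturally = any(acc[pos] == variant for acc in natural_accessions)
        report.append((pos, reference[pos], variant, seen_naturally))
    return report

reference  = "ATGCCGTAAGCT"
engineered = "ATGCAGTAAGTT"          # differs at positions 4 and 10
accessions = ["ATGCAGTAAGCT",        # carries the A at position 4
              "ATGCCGTAAGCT"]

for pos, ref, var, natural in classify_changes(reference, engineered, accessions):
    status = "within natural variation" if natural else "outside natural variation"
    print(f"position {pos}: {ref}->{var} ({status})")
```

Real comparisons work on whole genomes with alignment tools rather than position-by-position string comparison, but the logic of placing each change in the context of natural variation is the same.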

Background

In July 1995, the first genome sequence of a living organism, the bacterium Haemophilus influenzae (1,830,137 base pairs), was reported (Fleischmann et al., 1995). This paradigm-changing technological achievement was possible because of the development of automated DNA-sequencing methods, improved computer-processing power, and the development of algorithms for reconstructing a full genome on the basis of fragmented, random DNA sequences. In October 1995, the genome of the bacterium Mycoplasma genitalium was released (Fraser et al., 1995); this solidified whole-genome shotgun sequencing and assembly as the method for obtaining genome sequences. In the next two decades, higher-throughput and less expensive methods for genome sequencing and assembly emerged (for review, see McPherson, 2014) and enabled the sequencing of the genomes of hundreds of species, as well as thousands of individuals, in all kingdoms of life. For example, since the release of the draft sequence of the human reference genome in 2001 (Lander et al., 2001; Venter et al., 2001), thousands of individual human genomes have been sequenced, including such comparative genome-sequencing projects as: a deep catalog of human variation of thousands of individuals, 7 normal versus tumor cells from a single individual, families with inherited genetic disorders, and diseased versus healthy populations. Those projects have focused on detecting the allelic diversity in a species and associating genes with phenotypes, such as the propensity for specific diseases.

Limitations in Current De Novo Genome Sequencing and Assembly Methods for Plants

Current methods to sequence a genome and assemble a genome de novo entail random fragmentation of DNA, generation of sequence reads, and reconstruction of the original genome sequence by using assembly algorithms. Although the methods are robust and continue to improve, it is important to note that they fail to deliver the full genome sequence of complex eukaryotes. Indeed, even the human genome sequence (for which billions of dollars have been spent to obtain a high-quality reference genome sequence that has provided a wealth of useful information in understanding of human biology, including cancer and other diseases) is still incomplete. For plants, the benchmark for a high-quality genome assembly is that of the model species Arabidopsis thaliana, which has an extremely small genome that was published in 2000 (Arabidopsis Genome Initiative, 2000). More than 15 years after the release of the A. thaliana reference genome sequence and with the availability of sequences from more than 800 additional accessions, 8 an estimated 30–40 million nucleotides of sequence were still missing from the A. thaliana Col-0 reference genome assembly (Bennett et al., 2003). Most of the missing sequences are highly repetitive (such as ribosomal RNA genes and centromeric repeats), but some gene-containing regions are absent because of technical challenges.

7 1000 Genomes: A Deep Catalog of Human Genetic Variation. Available at http://www.1000genomes.org/. Accessed November 12, 2015.

8 1001 Genomes: A Catalog of Arabidopsis thaliana Genetic Variation. Available at http://1001genomes.org/. Accessed November 12, 2015.

With increased genome size and repetitive-sequence complexity, complete representation of the genome sequence becomes more challenging. Indeed, the genome assemblies of most major crop species (maize, wheat, barley, and potato) are all of only draft quality and have substantial gaps (Schnable et al., 2009; Potato Genome Sequencing Consortium, 2011; International Barley Genome Sequencing, 2012; Li et al., 2014a); none provides a complete, full representation of the genome.

In several major crops, when the committee was writing its report, projects equivalent to the human 10,000-genomes project were under way to determine the overall diversity of the species by documenting the “pan-genome” (Weigel and Mott, 2009). It has been surprising in several of these studies that there is substantial genomic diversity in some plant species not only in allelic composition but also in gene content (Lai et al., 2010; Hirsch et al., 2014; Li et al., 2014b). Thus, a single “reference” genome sequence derived from a single individual of a species will fail to represent the genetic composition and diversity of the overall population adequately and will therefore limit interpretations of directed changes in the genome (such as ones that can be delivered by emerging genome-editing methods that are being used to generate GE crops).

Resequencing: Assessing Differences Between the Reference and Query Genome

Once the DNA sequence of a crop&rsquos genome is assembled well enough to serve as a reference genome, resequencing becomes a powerful and cost-effective method for detecting genomic differences among related accessions (individuals) or GE lines. Resequencing entails generating random-sequence reads of the query genome (the genome that is being compared with the reference genome), aligning those sequence reads with a reference genome, and using algorithms to determine differences between the query and the reference. The strengths of this approach are that it is inexpensive and permits many query genomes to be compared with the reference genome and thereby provides substantial data about similarities and differences between individuals in a species (Figure 7-5). However, limitations of the approach can affect determination of whether two genomes are different. First, sequence read quality will affect data interpretation in that read errors can be misinterpreted as sequence polymorphisms. Second, the coverage of sequence reads generated can limit interrogation of the whole genome because the sampling is random and some regions of the genome are underrepresented in the read pool. Third, library construction 9 and sequencing

9 A library of DNA sequence is made by generating random fragments of the genome that collectively represent the full sequence of the genome.

FIGURE 7-5 Detection of genome, epigenome, transcriptome, proteome, and metabolome alterations in genome-edited, genetically engineered plants.
SOURCE: Illustration by C. R. Buell.
NOTE: To perform various -omics assessments of genome-edited plants, both the wild-type (unmodified) and the genome-edited plant are subjected to genome sequencing, epigenomic characterization, transcriptome profiling, proteome profiling, and metabolite profiling. A, genome sequencing is performed on both the wild-type and genome-edited accession, and differences in the DNA sequence (red G) are detected with bioinformatics methods. B, changes in the epigenome are assessed with bisulfite sequencing and chromatin immunoprecipitation with

bias will affect which sequences are present in the resequencing dataset and consequently available for alignment with the reference genome. Fourth, read-alignment algorithms fail to detect all polymorphisms if the query diverges too widely from the reference, especially with insertions and deletions or with SNPs near them. Fifth, read alignments and polymorphism detection are limited to nonrepetitive regions of the genome, so regions that are repetitive in the genome cannot be assessed for divergence. Although obstacles remain, resequencing is a powerful method for measuring differences in genome sequences between wild-type plants (normal untransformed individuals) and engineered plants. With expected improvements in technology, the resolution of resequencing to reveal differences between two genomes will improve.

Computational Approaches

Alternatives to resequencing approaches to identify polymorphisms in DNA sequence between two genomes were emerging when the committee was writing its report. The foundation of computational approaches to identify polymorphisms is algorithms that perform k-mer counting (a k-mer is a unique nucleotide sequence of a given length) in which unique k-mers are identified in two read pools (for example, wild type and mutant) and k-mers that differ between the two samples are then computationally identified. Those k-mers are then further analyzed to identify the nature of the polymorphism (SNP versus insertion or deletion) and to associate the polymorphism with a gene and potential phenotype (Nordstrom et al., 2013 Moncunill et al., 2014). The sensitivity and specificity of such programs are comparable with or better than the current methods that detect SNPs and

antibodies that target modified histones that are associated with chromatin lollipops signify methylated cytosine residues. C, transcriptome sequencing is used to measure expression abundances in wild-type (WT) and genome-edited (GE) lines in example shown, expression ranges from 0 to 15 for genes A through J variance in all the genes is apparent, with only gene F showing substantial expression differences between the wild-type and the genome-edited line as would be expected in a knockout line. D, proteomics is used to measure differences in protein abundance in wild-type vs genome-edited line all proteins are equally present in wild-type and genome-edited lines (yellow dots), whereas protein F is present only in wild-type line (green dot), as expected from a knockout line. E, levels of metabolites A through M in wild-type and genome-edited lines levels of metabolite F are zero in contrast to the wild type, as would be expected in a knockout line.

insertions/deletions by using genome-sequencing methods and thus have the potential to identify genome variation introduced through genetic engineering more robustly. The committee expects the field to continue to develop rapidly and to enable researchers to read genomic DNA with increased sensitivity and specificity.
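The k-mer comparison underlying these algorithms can be illustrated with a minimal sketch (hypothetical toy reads; real tools such as those cited use efficient hashing and sequencing-error filtering):

```python
from collections import Counter

def kmer_counts(reads, k=21):
    """Count every k-mer occurring in a pool of sequencing reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def differing_kmers(pool_a, pool_b, k=21):
    """Return k-mers present in one read pool but absent from the other."""
    a, b = kmer_counts(pool_a, k), kmer_counts(pool_b, k)
    return set(a) - set(b), set(b) - set(a)

# A SNP in the mutant pool creates up to k novel k-mers spanning the changed base.
wild_type = ["ACGTACGTACGTACGTACGTACGTA"]
mutant    = ["ACGTACGTACGTCCGTACGTACGTA"]  # single-base change at position 12
only_wt, only_mut = differing_kmers(wild_type, mutant, k=5)
```

Note that because this toy wild-type read is repetitive, every wild-type k-mer still occurs elsewhere in the mutant, which also illustrates why repetitive regions confound k-mer (and alignment) approaches.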

Utility of Transcriptomics, Proteomics, and Metabolomics in Assessing Biological Effects of Genetic Engineering

As stated in the 2004 National Research Council report Safety of Genetically Engineered Foods, understanding the composition of food at the RNA, protein, and metabolite levels is critical for determining whether genetic engineering results in a difference in substantial equivalence compared to RNA, protein, and metabolite levels in conventionally bred crops (NRC, 2004; see Chapter 5). Although the genome provides the "blueprint" for the cell, assessment of the transcriptome, proteome, and metabolome can provide information on the downstream consequences of genome changes that lead to an altered phenotype. Methods used to assess transcripts, proteins, and metabolites in plants are described below, with the committee's commentary on limitations of the sensitivity and specificity of detection and interpretation that existed when this report was being written. One caveat in the use of any of these techniques is inherent biological variation, regardless of genetic-engineering status. Even with identical genotypes grown under identical conditions, there is variation in the transcriptome, proteome, and metabolome. Scientists address such variation by using biologically replicated experiments and multiple -omics and molecular-biology approaches. In addition to biological variation, allelic variation results in different levels of transcripts, proteins, and metabolites in different accessions. To provide context for any observed changes in the transcriptome, proteome, or metabolome attributable to a genetic-engineering event, the broader range of variation in commercially grown cultivars of a crop species can be compared with that of a GE line to determine whether modified levels are outside the realm of variation in the crop. Thus, in assessment of GE crops, interpretation must be in the context of inherent biological and allelic variation of the specific crop.
Assessment is also made difficult by the fact that scientists have little or no knowledge of what functions a substantial number of genes, transcripts, proteins, and metabolites perform in a plant cell.

Transcriptomics

Advancements in high-throughput sequencing technologies have enabled the development of robust methods for quantitatively measuring

the transcriptome, the expressed genes in a sample. One method, known as RNA sequencing (RNA-seq), entails isolation of RNA, conversion of the RNA to DNA, generation of sequence reads, and bioinformatic analyses to assess expression levels, alternative splicing, and alternative transcriptional initiation or termination sites (Wang et al., 2009; de Klerk et al., 2014). This method can be applied to mRNA, small RNAs (which include interfering RNAs involved in RNAi), total RNA, RNA bound to ribosomes, and RNA-protein complexes to gain a detailed assessment of RNAs in a cell. Methods to construct RNA-seq libraries, generate sequence reads, align to a reference genome, and determine expression abundances are fairly robust even with draft genome sequences if they provide nearly complete representation of the genes in the genome (Wang et al., 2009; de Klerk et al., 2014). Statistical methods to determine differential expression between any two samples, such as two plants with identical genotypes at different developmental stages, are continuing to mature but are limited by inherent biological variation in the transcriptome. Indeed, variation between independent biological replicates of wild-type tissues is well documented. For example, estimation of whole-transcriptome expression abundance in independent biological replicates of a given experimental treatment is considered to be highly reproducible if Pearson's correlation values are more than 0.95; values greater than 0.98 are typically observed. However, even with high Pearson's correlation values, numerous genes may exhibit different expression among biological replicates. Thus, differential gene expression in GE plants would need to be compared with the observed variation in gene expression in biological replicates of untransformed individuals to ensure the absence of major effects of the genetic-engineering event on the transcriptome.
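The replicate-reproducibility check described above can be sketched as follows (hypothetical abundance values; production RNA-seq pipelines use dedicated statistical packages):

```python
import math

def pearson(x, y):
    """Pearson correlation between two expression-abundance vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical expression abundances for ten genes in two biological replicates.
rep1 = [120, 5, 300, 45, 0.5, 80, 210, 15, 60, 33]
rep2 = [118, 6, 290, 50, 0.4, 78, 215, 14, 66, 30]
r = pearson(rep1, rep2)
# Replicates with r > 0.95 are considered highly reproducible, though
# individual genes may still differ between replicates.
```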

Overshadowing any expression differences discovered between a wild-type plant and an engineered plant is the fact that little is known about the exact function of a substantial number of genes, transcripts, and proteins for any plant species. In maize, nearly one-third of the genes have no meaningful functional annotation; even when informative functional annotation is provided, it was most likely assigned by using automated transitive annotation methods that depend heavily on sequence similarity. Thus, even if differentially expressed genes are detected between the wild-type and GE samples, interpreting them in the context of health or effects on the ecosystem may be challenging at best. For example, a study of the effects of expression of an antifungal protein introduced into rice with genetic engineering showed changes in about 0.4 percent of the transcriptome in the GE lines (Montero et al., 2011). Analysis of 20 percent of the changes indicated that 35 percent of the unintended effects could be attributed to the tissue-culture process used for plant transformation and regeneration, whereas 15 percent appeared to be event-specific and attributable to the presence of the

transgene. About 50 percent of the changes that were attributed to the presence of the transgene were in expression of genes that could be induced in the non-GE rice by wounding. It is impossible to determine whether the changes in transcript levels recorded in the study indicate that the GE rice might be worse than, equal to, or better than its non-GE counterpart as regards food safety. One way to assess the biological effects of genetic engineering on the transcriptome is to include a variety of conventionally bred cultivars in the study and determine whether the range of expression levels in the GE line falls within the range observed for the crop, but this method will not provide definitive evidence of food or ecosystem safety.
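The range comparison suggested above can be sketched as a simple check (hypothetical values; a real assessment would use replicated measurements and formal statistics):

```python
def within_conventional_range(ge_value, conventional_values):
    """Check whether a GE line's expression (or metabolite) level falls
    within the range observed across conventionally bred cultivars."""
    return min(conventional_values) <= ge_value <= max(conventional_values)

# Hypothetical transcript abundances for one gene across conventional cultivars.
cultivars = [10.2, 14.8, 9.7, 12.5, 16.1, 11.0]
inside = within_conventional_range(13.4, cultivars)    # within observed range
outside = within_conventional_range(25.0, cultivars)   # outside observed range
```

As the text notes, falling within the conventional range is supporting evidence, not definitive proof, of food or ecosystem safety.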

Proteomics

Several methods permit comparison of protein composition and post-translational protein modifications between samples (for review, see May et al., 2011). For example, two-dimensional difference in-gel electrophoresis permits quantitative comparison of two proteomes through differential labeling of the samples followed by separation and quantification (Figure 7-5 D). In mass spectrometry (MS), another method for examining the proteome, proteins are first broken into specific fragments (often by proteases, which are enzymes that catalyze the cleavage of proteins into peptides at specific sites) and fractionated with such techniques as liquid chromatography. Then the mass-to-charge ratios of the peptides are detected with MS. MS data typically provide a unique "signature" for each peptide, and the identity of the peptides is typically determined by using search algorithms to compare the signatures with databases of predicted peptides and proteins derived from genome or transcriptome sequence data. Differential isotope labeling can be used in the MS approach to determine quantitative differences in protein samples. One limitation of all current proteomic techniques is sensitivity; whole-proteome studies typically detect only the most abundant proteins (Baerenfaller et al., 2008). Furthermore, sample-preparation methods need to be modified to detect different fractions of the proteome (such as soluble versus membrane-bound and small versus large proteins) (Baerenfaller et al., 2008). Thus, to provide a broad assessment of the proteome, an array of sample-preparation methods must be used. Finally, as with the other -omics methods, interpretation of the significance of proteomic differences is made difficult by the fact that scientists have little knowledge of what a large number of proteins do in a plant cell.
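A minimal sketch of the database-matching step (hypothetical peptide sequences and a simplified tryptic-cleavage rule; real search engines score full fragment spectra, not just intact masses):

```python
# Monoisotopic residue masses (Da) for a few amino acids; water is added
# once per peptide for the terminal H and OH.
RESIDUE_MASS = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
                "V": 99.06841, "L": 113.08406, "K": 128.09496, "R": 156.10111}
WATER = 18.01056

def tryptic_peptides(protein):
    """Cleave a protein after K or R (ignoring the proline rule for brevity)."""
    peptides, start = [], 0
    for i, aa in enumerate(protein):
        if aa in "KR":
            peptides.append(protein[start:i + 1])
            start = i + 1
    if start < len(protein):
        peptides.append(protein[start:])
    return peptides

def peptide_mass(peptide):
    return sum(RESIDUE_MASS[aa] for aa in peptide) + WATER

def match_mass(observed, database, tol=0.01):
    """Identify database peptides whose mass matches an observed peptide mass."""
    return [p for p in database if abs(peptide_mass(p) - observed) <= tol]

db = tryptic_peptides("GASPVKLAVRSSK")   # hypothetical protein sequence
hits = match_mass(peptide_mass("LAVR"), db)
```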

Metabolomics

It is common practice in evaluating GE crops for regulatory approval to require targeted profiling of specific metabolites or classes of metabolites

that may be relevant to the trait being developed or that are known to be present in the target species and to be potentially toxic if present at excessive concentrations. Under current regulatory requirements, substantial metabolic equivalence is assessed on the basis of concentrations of gross macromolecules (for example, protein or fiber), such nutrients as amino acids and sugars, and specific secondary metabolites that might be predicted to cause concern.

As with genomics, transcriptomics, and proteomics, the approaches collectively known as metabolomics have been developed to determine the nature and concentrations of all metabolites in a particular organism or tissue. It has been argued that such information should be required before a GE crop clears regulatory requirements for commercialization. However, in contrast with genomic and transcriptomic approaches, with which current sequencing technologies make it technically easy to assess DNA sequences and to measure relative concentrations of most or all transcripts in an organism, metabolomics as currently performed can provide useful data on only a subset of metabolites. That is because each metabolite is chemically different, whereas DNA and RNA comprise different orderings of just four nucleotide bases. Metabolites have to be separated, usually with gas chromatography or high-performance liquid chromatography; their nature and concentrations are then determined, usually with MS. The mass spectra are compared with a standard library of chemicals run on the same analytical system. The major problem for this type of metabolomic analysis of plants is the plant kingdom's large number of genus-specific or even species-specific natural products (see section "Comparing Genetically Engineered Crops and Their Counterparts" in Chapter 5 for discussion of plant natural products). Advanced commercial platforms for plant metabolomics currently measure about 200 identified compounds, usually within primary metabolism, and less broadly distributed natural products are poorly represented (Clarke et al., 2013). However, these approaches can differentiate a much larger number of distinct but unidentified metabolites, and it is useful to know whether concentrations of a metabolite are specifically affected in a GE crop even if the identity of the particular metabolite is not known.
For example, with a combination of separation platforms coupled to mass spectrometry, it was possible to resolve 175 unique identified metabolites and 1,460 peaks with no or imprecise metabolite annotation, together estimated to represent about 86 percent of the chemical diversity of tomato (Solanum lycopersicum) as listed in a publicly available database (Kusano et al., 2011). Although such an approach allows one to determine whether metabolite peaks are present in a GE crop but not in the non-GE counterpart or vice versa, metabolomics, in the absence of a completely defined metabolome for the target species in which the toxicity of all components is known, is not able to determine

with confidence that a GE or non-GE plant does not contain any chemically identified molecule that is unexpected or toxic.

An alternative approach to nontargeted analysis of metabolites is to perform metabolic fingerprinting and rely on statistical tools to compare GE and non-GE materials. That does not necessarily require prior separation of metabolites and can use flow-injection electrospray ionization mass spectrometry (Enot et al., 2007) or nuclear magnetic resonance (NMR) spectroscopy (Baker et al., 2006; Ward and Beale, 2006; Kim et al., 2011). NMR spectroscopy is rapid and requires no separation but depends heavily on computational and statistical approaches to interpret spectra and evaluate differences.

Generally, with a few exceptions, metabolomic studies have concluded that the metabolomes of crop plants are affected more by environment than by genetics and that modification of plants with genetic engineering typically does not bring about off-target changes in the metabolome that would fall outside natural variation in the species. Baseline studies of the metabolomes (representing 156 metabolites in grain and 185 metabolites in forage) of 50 genetically diverse non-GE DuPont Pioneer commercial maize hybrids grown at six locations in North America revealed that the environment had a much greater effect on the metabolome (affecting 50 percent of the metabolites) than did the genetic background (affecting only 2 percent of the metabolites); the difference was more striking in forage samples than in grain samples (Asiago et al., 2012). Environmental factors were also shown to play a greater role than genetic engineering on the concentrations of most metabolites identified in Bt rice (Chang et al., 2012). In soybean, nontargeted metabolomics was used to demonstrate the dynamic ranges of 169 metabolites from the seeds of a large number of conventionally bred soybean lines representing the current commercial genetic diversity (Clarke et al., 2013). Wide variations in concentrations of individual metabolites were observed, but the metabolome of a GE line engineered to be resistant to the triketone herbicide mesotrione (which targets the carotenoid pathway that leads to photobleaching of sensitive plants) did not deviate with statistical significance from the natural variation in the current genetic diversity except in the expected changes in the targeted carotenoid pathway.
Similar metabolomic approaches led to the conclusion that a Monsanto Bt maize was substantially equivalent to conventionally bred maize if grown under the same environmental conditions (Vaclavik et al., 2013) and that carotenoid-fortified GE rice was more similar to its parental line than to other rice varieties (Kim et al., 2013). Those studies suggest that use of metabolomics for assessing substantial equivalence will require testing in multiple locations and careful analysis to differentiate genetic from environmental effects, especially because there will probably be effects of gene&ndashenvironment interactions.

Some metabolomic and transcriptomic studies have suggested that transgene insertion or the tissue-culture process involved in regeneration of transformed plants can lead to "metabolic signatures" associated with the process itself (Kusano et al., 2011; Montero et al., 2011). That was reported for GE tomatoes with overproduction of the taste-modifying protein miraculin, although it was pointed out by the authors that, as in comparable studies with other GE crops, "the differences between the transgenic lines and the control were small compared to the differences observed between ripening stages and traditional cultivars" (Kusano et al., 2011).

For metabolomics to become a useful tool for providing enhanced safety assessment of a specific GE crop, it will be necessary to develop a chemical library that contains all potential metabolites present in the species under all possible environmental conditions. It is a daunting task that may be feasible for a few major commodity crops under currently occurring biotic and abiotic stresses, but even that would not necessarily cover future environmental conditions. Annotated libraries of metabolites are unlikely to be developed for minor crops in the near future.

The Epigenome

Background

Whereas the DNA sequence of a gene encodes the mRNA that is translated into the corresponding protein, the rate at which a gene in the nucleus of a eukaryotic cell is transcribed into mRNA can be heavily influenced by chemical modification of the DNA of the gene and by chemical modification of the proteins associated with the DNA. In plants and other eukaryotes, genomic nuclear DNA can be chemically modified and is bound to an array of proteins in a DNA-protein complex termed chromatin. The major proteins in chromatin are histone proteins, which have an important role in regulating the accessibility of the transcriptional machinery to the gene and its promoter (regulatory region) and thereby control synthesis of mRNAs and proteins. Multiple types of histone proteins are found in plants, each with an array of post-translational modifications (for example, acetylation and methylation) that can affect the transcriptional competence of a gene. DNA can also be covalently modified by methylation of cytosines, which affects transcriptional competence. Collectively, those modifications, which influence the expression of genes and are inheritable over various time spans, are known as epigenetic marks.

Epigenetic marks are determinants of transcriptional competence, and alteration of the epigenetic state (which occurs naturally but infrequently) can alter expression profiles or patterns of target genes. For example, when a transposable element inserts in or near a gene, the gene can be "silenced"

as regions near a transposon become highly methylated and transcriptionally suppressed owing to the activity of the cell's native RNA-mediated DNA methylation machinery. Different epigenetic marks occur naturally in crop species; examples of transposable element-mediated gene silencing include allelic variation at the tomato 2-methyl-6-phytylquinol methyltransferase gene involved in vitamin E biosynthesis (Quadrana et al., 2014) and imprinting as seen in endosperm tissue, in which differential insertion of transposable elements occurs in the maternal and paternal parents (Gehring et al., 2009).

Methods of Characterizing the Epigenome

Methods of characterizing the epigenome are available and improving rapidly. For DNA methylation, high-throughput, single-nucleotide resolution can be obtained through bisulfite sequencing (BS-seq; for review, see Feng et al., 2011; Krueger et al., 2012). BS-seq methods mirror those of genome resequencing except that the genomic DNA is first treated with bisulfite, which converts cytosines to uracils but does not affect 5-methylcytosine residues. As a consequence, nonmethylated cytosines will be detected as thymidines after the polymerase chain reaction step during epigenome-library construction. After sequencing, reads are aligned with a reference genome sequence, and nonmethylated cytosines are detected as SNPs and compared with a parallel library constructed from untreated DNA (see section above "Resequencing: Assessing Differences Between the Reference and Query Genome"; Figure 7-5). There are limitations of BS-seq approaches, such as incomplete conversion of cytosines, degradation of DNA, and an inability to assess the full methylome because of read-mapping limitations, sequencing depth, and sequencing errors, as described above for resequencing. Another limitation is the dynamic nature of plant-genome cytosine methylation. Plants derived from an identical parent that have not been subject to any traditional selection or GE transformation can have different epigenomes, an example of "epigenetic drift" (Becker et al., 2011). Thus, determining the epigenome of a plant at one specific point in time will not necessarily indicate the future epigenome of offspring of that plant.
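The logic of bisulfite conversion and methylation calling can be sketched as follows (a toy single-read example with hypothetical positions; real BS-seq analysis aligns millions of reads and models incomplete conversion and sequencing error):

```python
def bisulfite_convert(sequence, methylated_positions):
    """Simulate bisulfite treatment: unmethylated C -> U, read as T after PCR;
    5-methylcytosine is protected and stays C."""
    return "".join(
        "T" if base == "C" and i not in methylated_positions else base
        for i, base in enumerate(sequence)
    )

def call_methylation(reference, converted_read):
    """Cytosines still read as C after conversion are called methylated;
    C->T mismatches at reference cytosines are called unmethylated."""
    methylated, unmethylated = set(), set()
    for i, (ref, obs) in enumerate(zip(reference, converted_read)):
        if ref == "C":
            (methylated if obs == "C" else unmethylated).add(i)
    return methylated, unmethylated

reference = "ACGTCCGATCGA"
read = bisulfite_convert(reference, methylated_positions={4})
meth, unmeth = call_methylation(reference, read)
```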

Histone marks can be detected through chromatin immunoprecipitation coupled with high-throughput sequencing (ChIP-Seq; for review, see Yamaguchi et al., 2014; Zentner and Henikoff, 2014). First, chromatin is isolated so that the proteins remain bound to the DNA. Then the DNA is sheared, and the DNA that is bound to specific histone proteins is selectively removed by using antibodies specific to each histone mark. The DNA bound to an antibody is then used to construct a library that is sequenced and aligned with a reference genome, and an algorithm is used to define the

regions of the genome in which the histone mark is found. Sensitivity and specificity of ChIP-Seq depend heavily on the specificity of the histone-mark antibodies, on technical limitations in alignment of sequence reads with the reference genome, and on the overall quality of the reference genome itself. Also, the present state of understanding does not permit robust prediction of the effects of many epigenetic modifications on gene expression, and gene expression can be more thoroughly and readily assessed by transcriptomics.

Evaluation of Crop Plants Using -Omics Technologies

The -omics evaluation methods described above hold great promise for assessment of new crop varieties, both GE and non-GE. In a tiered regulatory approach (see Chapter 9), -omics evaluation methods could play an important role in a rational regulatory framework. For example, consider the introduction of a previously approved GE trait such as a Bt protein in a new variety of the same species. Having an -omics profile in a new GE variety that is comparable to the profile of a variety already in use should be sufficient to establish substantial equivalence (Figure 7-6, Tier 1). Furthermore, -omics analyses that reveal a difference that is understood to have no adverse health effects (for example, increased carotenoid content) should be sufficient for substantial equivalence (Figure 7-6, Tier 2).

The approach described above could also be used across species. For example, once it is established that production of a protein (such as a Bt protein) in one plant species poses no health risk, then the only potential health risk of Bt expression in another species is unintended off-target effects. -Omics analyses that reveal no differences (Figure 7-6, Tier 1) or in which revealed differences present no adverse health effects (Figure 7-6, Tier 2) in comparison with the previously deregulated GE crop or the range of variation found in cultivated, non-GE varieties of the same species provide evidence for substantial equivalence. As discussed in Chapter 5 (see section &ldquoNewer Methods for Assessing Substantial Equivalence&rdquo), there have been more than 60 studies in which -omics approaches were used to compare GE and non-GE varieties, and none of these studies found differences that were cause for concern.

There are also scenarios for which -omics analyses could indicate that further safety testing is warranted, such as if -omics analyses reveal a difference that is understood to have potential adverse health effects (for example, increased expression of genes responsible for glycoalkaloid synthesis) (Figure 7-6, Tier 3). Another scenario is if -omics analyses reveal a change of a protein or metabolite for which the consequences cannot be interpreted and are outside the range observed in GE and non-GE varieties of the crop (Figure 7-6, Tier 4). It is important to note that a Tier 4 scenario is not in and of itself an indication of a safety issue. The functions

FIGURE 7-6 Proposed tiered evaluation strategy for crops using -omics technologies.
SOURCE: Illustration by R. Amasino.
NOTE: A tiered set of paths can be taken depending on the outcome of the various -omics technologies. In Tier 1, there are no differences between the variety under consideration and a set of conventionally bred varieties that represent the range of genetic and phenotypic diversity in the species. In Tier 2, differences are detected that are well understood to have no expected adverse health or environmental effects. In Tiers 3 and 4, differences are detected that may have potential health or environmental effects and thus require further safety testing.

or health effects of consumption of many genes and corresponding RNAs, proteins, and metabolites in non-GE plants are not known. Furthermore, the chemical structure of many metabolites in plants that can be detected as &ldquopeaks&rdquo in various analytical systems is not known. Substantially more basic knowledge is needed before -omics datasets can be fully interpreted.
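The tiered logic of Figure 7-6 can be sketched as a simple decision function (a hypothetical illustration of the committee's scheme, not a regulatory algorithm):

```python
def classify_tier(differences):
    """Assign a variety to an evaluation tier from -omics comparisons.
    `differences` is a list of dicts, one per observed difference:
      adverse_effect: True (known potential harm), False (understood, benign),
                      or None (consequence cannot be interpreted)
      within_crop_range: whether the level falls within variation observed
                         in cultivated varieties of the crop."""
    if not differences:
        return 1  # Tier 1: no differences from the comparison set
    tier = 2      # Tier 2: differences understood to pose no adverse effects
    for d in differences:
        if d["adverse_effect"] is True:
            return 3  # Tier 3: potential adverse effect; further testing
        if d["adverse_effect"] is None and not d["within_crop_range"]:
            tier = 4  # Tier 4: uninterpretable and outside natural range
    return tier
```

A Tier 4 outcome triggers further evaluation but, as the text notes, is not in itself an indication of a safety issue.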

The state of the art of the different -omics approaches varies considerably. Advances in the efficiency of DNA-sequencing technology enable a complete genome or transcriptome to be sequenced at a cost that is modest on the scale of regulatory costs. Transcriptomics could play an important role in evaluation of substantial equivalence because it is relatively straightforward to generate and compare extensive transcriptomic data from multiple biological replicates of a new crop variety versus its already-in-use progenitor. As noted above, if no unexpected differences are found, this is evidence of substantial equivalence. It is possible that two varieties with equivalent transcriptomes have a difference in the level of a metabolite due to an effect of the product of a transgene on translation of a particular mRNA or on activity of a particular protein, but these are unlikely scenarios.

It is also straightforward and relatively low in cost to generate genome-sequence data from many individuals from a new GE or non-GE variety to determine which lineage has the fewest nontarget changes to its genome. As noted earlier in the chapter, mutagenesis, although currently classified as conventional breeding, can result in extensive changes to the genome; thus, generating DNA-sequence data will be useful in evaluating varieties produced by this method.

Metabolomic and proteomic techniques cannot presently provide a complete catalog of the metabolome or proteome. Nevertheless, these -omics approaches can play a role in assessment. For example, a similar metabolome or proteome in a new variety compared to an existing variety provides supporting evidence of substantial equivalence, whereas a difference can indicate that further evaluation may be warranted.

The most thorough evidence of substantial equivalence would result from a complete knowledge of the biochemical constituents of one crop variety compared to other varieties. As noted above, that is not possible with present techniques for the proteome and metabolome. However, looking to the future, an increasing knowledge base of plant biochemistry will translate into fewer analyses that result in a Tier 4 situation, and basic research in plant biochemistry will continue to expand the knowledge base that will enable the thorough and rational evaluation of new crop varieties; basic research will also expand fundamental understanding of basic biological processes in plants and thus enable advances in molecular plant breeding.

FINDING: Application of -omics technologies has the potential to reveal the extent of modifications of the genome, the transcriptome, the epigenome, the proteome, and the metabolome that are attributable to conventional breeding, somaclonal variation, and genetic engineering. Full realization of the potential of -omics technologies to assess substantial equivalence would require the development of extensive species-specific databases, such as the range of variation in the transcriptome, proteome, and metabolome in a number of genotypes grown in diverse environmental conditions. Although it is not yet technically feasible to develop extensive species-specific metabolome or proteome databases, genome sequencing and transcriptome characterization can be performed.

RECOMMENDATION: To realize the potential of -omics technologies to assess intended and unintended effects of new crop varieties on human health and the environment and to improve the production and quality of crop plants, a more comprehensive knowledge base of plant biology at the systems level (DNA, RNA, protein, and metabolites) should be constructed for the range of variation inherent in both conventionally bred and genetically engineered crop species.


How does biotechnology help us?

Satellite images make clear the massive changes that mankind has made to the surface of the Earth: cleared forests, massive dams and reservoirs, millions of miles of roads. If we could take satellite-type images of the microscopic world, the impact of biotechnology would be no less obvious. The majority of the food we eat comes from engineered plants, which are modified – either via modern technology or by more traditional artificial selection – to grow without pesticides, to require fewer nutrients, or to withstand the rapidly changing climate. Manufacturers have replaced petroleum-based ingredients with biomaterials in many consumer goods, such as plastics, cosmetics, and fuels. Your laundry detergent? It almost certainly contains biotechnology. So do nearly all of your cotton clothes.

But perhaps the biggest application of biotechnology is in human health. Biotechnology is present in our lives before we’re even born, from fertility assistance to prenatal screening to the home pregnancy test. It follows us through childhood, with immunizations and antibiotics, both of which have drastically improved life expectancy. Biotechnology is behind blockbuster drugs for treating cancer and heart disease, and it’s being deployed in cutting-edge research to cure Alzheimer’s and reverse aging. The scientists behind the technology called CRISPR/Cas9 believe it may be the key to safely editing DNA for curing genetic disease. And one company is betting that organ transplant waiting lists can be eliminated by growing human organs in chimeric pigs.




