Monday, September 20, 2010

Addiction: Could it be a big lie?


Go ahead. Judge drug addicts. Call them selfish. A Harvard psychologist gives you permission.

You do not have to be mean to them, he says. Just do not treat them as if they have some sort of, you know, illness.

Gene M. Heyman's book Addiction: A Disorder of Choice comes out this week. Like several other books released this decade, it disparages the overwhelming scientific consensus that addiction is an involuntary disease. Supporters of the overwhelming scientific consensus are not amused.

"His argument crashes and burns," says Tony George, the head of addiction psychiatry at the University of Toronto. "I don't think there's too many self-respecting scholars in the addiction field who would agree with him. I'm shocked that Harvard University Press would publish that."

"These guys – I don't know, academia, they just kind of take what they want, and they don't care about the truth, or what the studies show," says Norman Miller, a professor of medicine at Michigan State University.

"What aspect of disease," says Norman Hoffman, a psychology professor at Western Carolina University, "does he not understand?"

Heyman, a Harvard lecturer in psychology, did not expect to be lauded by the medical-scientific establishment. His book indicts its members. Appealing to an eclectic mix of studies and examples – Philip Roth's impotent alter ego Nathan Zuckerman makes a brief appearance – he attempts to persuade us that we have been persistently deceived by so-called addiction experts who do not understand addiction.

Though hardly controversial to anyone outside the small group of dissidents who want it to be, the semantic disease-or-not debate has important practical implications. How addiction is viewed affects how addicts are treated, by the public and by medical professionals, and how government allocates resources to deal with the problem. Heyman, who says he was once reluctant to share his conclusions, now makes his case forcefully.

Can humans be genetically predisposed to addiction? Sure, he writes, but this does not mean addicts' drug use is not a voluntary behaviour. Are addicts self-destructive? Of course, he writes, but this does not mean they do not respond to the costs and benefits associated with their decisions, even when addiction has changed their brains. Is addiction a chronic, lifelong disorder? No, he concludes. Most experts, he argues, do not understand just how many addicts quit for good.

Addiction draws heavily on behavioural economics, a field that fuses psychology with economic theory to predict human behaviour. The book is complex.

It is fundamentally based, however, on that last, simple point: Addicts quit. Clinical experts believe addiction cannot be permanently conquered, Heyman writes, because they tend to study only addicts who have entered treatment programs. People who never enter treatment – more than three-quarters of all addicts, according to most estimates – relapse far less frequently than those who do, since people in treatment more frequently have additional medical and psychiatric problems.

Miller says Heyman has misinterpreted the data to which he points. George says studies of non-treatment-seeking people contradict Heyman's conclusions. Says Hoffman about those conclusions: "Yeah, so?"

Many addicts, Hoffman agrees, can indeed quit of their own volition. But some people live long lives with cancer. This is not proof that cancer is not a disease, he says, merely that some people suffer from more severe cases of diseases than others.

"If you compare Type 2 diabetes to Type 1 diabetes, one is much more virulent, more difficult to control. But we call them the same; we call them both diabetes," he says. "Since we're talking about a plethora of genes involved in addiction, we may also be looking here at a variety of illnesses that we're labelling the same but are really very different."

Heyman concurs with the expert consensus on the nature of addicts' thinking at the time of a relapse. The addict, he writes, does not choose to be an addict; he or she merely chooses to use the drug one more time, nothing more, and thus ends up an addict unintentionally.

The question is why the addict chooses to use the drug one more time. "The evidence from neuroimaging, animal studies, genetic association studies, clinical trials, is overwhelming," says George: The addicted brain is a changed brain. It is simply incapable of resisting a desired drug. But Heyman argues that addicts with sufficient self-control can organize their lives so that they are not directly confronted with an abstain-or-succumb decision.

People who have stronger incentives to remain clean, such as a good job, are more likely to make better lifestyle choices, Heyman writes. This is not contentious. But he also argues that whether an addict resists potentially harmful situations is a product of others' opinions, fear of punishment, and "values"; it is the outcome of a cost-benefit analysis.
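To make the cost-benefit framing concrete, here is a toy delay-discounting sketch in Python. It is an illustration of the general behavioural-economics idea, not a model taken from Heyman's book, and every number in it is made up: with hyperbolic discounting, a smaller immediate payoff (the drug) can overtake a larger delayed one (health, work, family) as the moment of choice arrives, so each "just this once" decision can look locally reasonable even when the long-run preference is to abstain.

def hyperbolic_value(amount, delay, k=1.0):
    """Subjective value of a reward received after `delay` time units (toy numbers)."""
    return amount / (1.0 + k * delay)

drug_value, drug_delay = 10.0, 0.0      # smaller payoff, available immediately
clean_value, clean_delay = 50.0, 30.0   # larger payoff, arrives much later

for days_before_choice in (10.0, 0.0):  # judged in advance vs. in the moment
    v_drug = hyperbolic_value(drug_value, drug_delay + days_before_choice)
    v_clean = hyperbolic_value(clean_value, clean_delay + days_before_choice)
    decision = "abstain" if v_clean > v_drug else "use"
    print(f"{days_before_choice:4.0f} days out: drug={v_drug:.2f}, clean={v_clean:.2f} -> {decision}")

Run as written, the same pair of options flips from "abstain" when judged ten days in advance to "use" at the moment of choice, which is the kind of preference reversal a choice-based account of addiction leans on.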

He does not dispute that drug use alters the brain. He does not dispute that some people have genes that make them more susceptible to addiction. What he disputes is that the person who is predisposed to addiction, or the person whose brain has been altered, is therefore unable to ponder the consequences of his or her actions. In other words, he disputes that biological factors make addicts' decisions compulsive.

This is where the experts he maligns begin to grumble again. In the changed brains of many addicts, says George, the capacity for voluntary behaviour with regard to drugs has been overwhelmed. It is as if the brakes that might allow them to stop before using have ceased functioning.

While addicts may not ignore the consequences of their actions, many – even people with families, good jobs and a lot to lose – are unable to make those consequences the basis for their actions.

"Where (Heyman) loses the argument," George says, "is that there are clearly both biological and environmental or contextual factors involved, but he's basically saying that the context and the environment are everything and the biology is irrelevant. Well, what we know about the brain, and the brain on drugs, is startling."

Heyman knows he is a heretic. The book jacket on Addiction calls his thoughts "radical"; in the book, he writes that "most people believe the disease interpretation of addiction is the scientific, enlightened, and humane perspective." Changing minds will be difficult.

Then again, some people manage to quit drugs.

Thursday, September 9, 2010

"Beam me up, Scotty"could soon be a reality


The catch-phrase "Beam me up, Scotty" of the iconic "Star Trek" serial could be close to reality with scientists successfully teleporting objects from one place to another with the help of energy rays.

A team of scientists at the Australian National University in Canberra, using tractor beams - rays that can move objects - has managed to shift tiny particles up to 59 inches from one spot to another.

Researcher Andrei Rhode said his team's technique can move objects 100 times bigger over a distance of almost five feet, reports the Daily Mail.

The method involves shining a hollow laser beam around tiny glass particles, which heats up the air around them; the centre of the beam, where the particles sit, stays cool, so the particles are drawn towards the beam's warm edges.

However, the heated air molecules that are bouncing around strike the surface of the glass particles and nudge them back to the cooler centre.

Rhode explained that by using two laser beams, the particles can be manipulated to move in different directions.

"We think the technique could work over even longer distances than those we've tested. With the particles and the laser we use, I would guess up to 10 metres (about 33ft)," he said.

The maximum distance he and his team could achieve was limited by the lab equipment.

But he said that unlike the beams in Star Trek, his technique would not work in outer space, where there is a vacuum.

"On Earth, though, there are many possible applications, such as being able to move dangerous substances and microbes."

Tuesday, September 7, 2010

God did not create the universe, says Hawking


God did not create the universe and the "Big Bang" was an inevitable consequence of the laws of physics, the eminent British theoretical physicist Stephen Hawking argues in a new book.

In "The Grand Design," co-authored with U.S. physicist Leonard Mlodinow, Hawking says a new series of theories made a creator of the universe redundant, according to the Times newspaper which published extracts on Thursday.

"Because there is a law such as gravity, the universe can and will create itself from nothing. Spontaneous creation is the reason there is something rather than nothing, why the universe exists, why we exist," Hawking writes.

"It is not necessary to invoke God to light the blue touch paper and set the universe going."

Hawking, 68, who won global recognition with his 1988 book "A Brief History of Time," an account of the origins of the universe, is renowned for his work on black holes, cosmology and quantum gravity.

Since 1974, the scientist has worked on marrying the two cornerstones of modern physics -- Albert Einstein's General Theory of Relativity, which concerns gravity and large-scale phenomena, and quantum theory, which covers subatomic particles.

His latest comments suggest he has broken away from previous views he has expressed on religion. Previously, he wrote that the laws of physics meant it was simply not necessary to believe that God had intervened in the Big Bang.

He wrote in A Brief History ... "If we discover a complete theory, it would be the ultimate triumph of human reason -- for then we should know the mind of God."

In his latest book, he said the 1992 discovery of a planet orbiting a star other than the Sun helped deconstruct the view of Isaac Newton, the father of physics, that the universe could not have arisen out of chaos but was created by God.

"That makes the coincidences of our planetary conditions -- the single Sun, the lucky combination of Earth-Sun distance and solar mass, far less remarkable, and far less compelling evidence that the Earth was carefully designed just to please us human beings," he writes.

Hawking, who is only able to speak through a computer-generated voice synthesizer, has a neuromuscular dystrophy that has progressed over the years and left him almost completely paralyzed.

He began suffering the disease in his early 20s but went on to establish himself as one of the world's leading scientific authorities, and has also made guest appearances in "Star Trek" and the cartoons "Futurama" and "The Simpsons."

Last year he announced he was stepping down as Cambridge University's Lucasian Professor of Mathematics, a position once held by Newton and one he had held since 1979.

"The Grand Design" is due to go on sale next week.

Saturday, September 4, 2010

The Beginning Of Universe


I'm on record as predicting that we'll understand what happened at the Big Bang within fifty years. Not just the “Big Bang model” — the paradigm of a nearly-homogeneous universe expanding from an early hot, dense, state, which has been established beyond reasonable doubt — but the Bang itself, that moment at the very beginning. So now is as good a time as any to contemplate what we already think we do and do not understand. (Also, I'll be talking about it Saturday night on Coast to Coast AM, so it’s good practice.)

There is something of a paradox in the way that cosmologists traditionally talk about the Big Bang. They will go to great effort to explain how the Bang was the beginning of space and time, that there is no “before” or “outside,” and that the universe was (conceivably) infinitely big the very moment it came into existence, so that the pasts of distant points in our current universe are strictly non-overlapping. All of which, of course, is pure moonshine. When they choose to be more careful, these cosmologists might say “Of course we don't know for sure, but…” Which is true, but it's stronger than that: the truth is, we have no good reasons to believe that those statements are actually true, and some pretty good reasons to doubt them.

I'm not saying anything avant-garde here. Just pointing out that all of these traditional statements about the Big Bang are made within the framework of classical general relativity, and we know that this framework isn’t right. Classical GR convincingly predicts the existence of singularities, and our universe seems to satisfy the appropriate conditions to imply that there is a singularity in our past. But singularities are just signs that the theory is breaking down, and has to be replaced by something better. The obvious choice for “something better” is a sensible theory of quantum gravity; but even if novel classical effects kick in to get rid of the purported singularity, we know that something must be going on other than the straightforward GR story.

There are two tacks you can take here. You can be specific, by offering a particular model of what might replace the purported singularity. Or you can be general, trying to reason via broad principles to argue about what kinds of scenarios might ultimately make sense.

Many scenarios have been put forward in the “specific” category. We have of course the “quantum cosmology” program, which tries to write down a wave function of the universe; the classic example is the paper by Hartle and Hawking. There have been many others, including recent investigations within loop quantum gravity. Although this program has led to some intriguing results, the silent majority of physicists seems to believe that there are too many unanswered questions about quantum gravity to take seriously any sort of head-on assault on this problem. There are conceptual puzzles: at what point does spacetime make the transition from quantum to classical? And there are technical issues: do we really think we can accurately model the universe with only a handful of degrees of freedom, crossing our fingers and hoping that unknown ultraviolet effects don't completely change the picture? It's certainly worth pursuing, but very few people (who are not zero-gravity tourists) think that we already understand the basic features of the wave function of the universe.

At a slightly less ambitious level (although still pretty darn ambitious, as things go), we have attempts to “smooth out” the singularity in some semi-classical way. Aguirre and Gratton have presented a proof by construction that such a universe is conceivable; essentially, they demonstrate how to take an inflating spacetime, cut it near the beginning, and glue it to an identical spacetime that is expanding in the opposite direction of time. This can either be thought of as a universe in which the arrow of time reverses at some special midpoint, or (by identifying events on opposite sides of the cut) as a one-way spacetime with no beginning boundary. In a similar spirit, Gott and Li suggest that the universe could “create itself,” springing to life out of an endless loop of closed timelike curves. More colorfully, “an inflationary universe gives rise to baby universes, one of which turns out to be itself.”

And of course, you know that there are going to be ideas based on string theory. For a long time Veneziano and collaborators have been studying what they dub the pre-Big-Bang scenario. This takes advantage of the scale-factor duality of the stringy cosmological field equations: for every cosmological solution with a certain scale factor, there is another one with the inverse scale factor, where certain fields are evolving in the opposite direction. Taken literally, this means that very early times, when the scale factor is nominally small, are equivalent to very late times, when the scale factor is large! I'm skeptical that this duality survives to low-energy physics, but the early universe is at high energy, so maybe that's irrelevant. A related set of ideas has been advanced by Steinhardt, Turok, and collaborators, first as the ekpyrotic scenario and later as the cyclic universe scenario. Both take advantage of branes and extra dimensions to try to follow cosmological evolution right through the purported Big Bang singularity; in the ekpyrotic case, there is a unique turnaround point, whereas in the cyclic case there are an infinite number of bounces stretching endlessly into the past and the future.
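For readers who want the duality spelled out: in the pre-Big-Bang literature it is usually written, for a homogeneous solution with scale factor a(t) and dilaton phi(t) in d spatial dimensions, roughly as

\[
  a(t) \;\to\; \frac{1}{a(t)}, \qquad \phi(t) \;\to\; \phi(t) - 2d\,\ln a(t),
\]

often combined with time reversal, so that a decelerating post-bang solution is paired with an accelerating pre-bang one. Take the precise form as schematic; conventions differ, and it is a property of the lowest-order string equations rather than something guaranteed in the full theory.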

Personally, I think that the looming flaw in all of these ideas is that they take the homogeneity and isotropy of our universe too seriously. Our observable patch of space is pretty uniform on large scales, it’s true. But to simply extrapolate that smoothness infinitely far beyond what we can observe is completely unwarranted by the data. It might be true, but it might equally well be hopelessly parochial. We should certainly entertain the possibility that our observable patch is dramatically unrepresentative of the entire universe, and see where that leads us.



Inflation makes it plausible that our local conditions don't stretch across the entire universe. In Alan Guth’s original scenario, inflation represented a temporary period in which the early universe was dominated by false-vacuum energy, which then went through a phase transition to convert to ordinary matter and radiation. But it was eventually realized that inflation could be eternal — unavoidable quantum fluctuations could keep inflation going in some places, even if it turns off elsewhere. In fact, even if it turns off “almost everywhere,” the tiny patches that continue to inflate will grow exponentially in volume. So the number of actual cubic centimeters in the inflating phase will grow without bound, leading to eternal inflation. Andrei Linde refers to such a picture as self-reproducing.
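A back-of-the-envelope version of that "grows without bound" claim, under the simplifying assumption that a fixed fraction p of the inflating volume stops inflating during each Hubble time 1/H:

\[
  V_{\rm inf}\!\left(t + H^{-1}\right) \approx e^{3}\,(1-p)\,V_{\rm inf}(t),
\]

since each inflating region grows by a factor of e in linear size, and hence e^3 (about 20) in volume, per Hubble time. The inflating volume therefore keeps growing whenever p < 1 - e^{-3}, roughly 0.95: unless the decay is extremely efficient, expansion wins and inflation never ends globally.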

If inflation is eternal into the future, maybe you don’t need a Big Bang? In other words, maybe it's eternal into the past, as well, and inflation has simply always been going on? Borde, Guth and Vilenkin proved a series of theorems purporting to argue against that possibility. More specifically, they show that a universe that has always been inflating (in the same direction) must have a singularity in the past.

But that's okay. Most of us suffer under the vague impression — with our intuitions trained by classical general relativity and the innocent-sounding assumption that our local uniformity can be straightforwardly extrapolated across infinity — that the Big Bang singularity is a past boundary to the entire universe, one that must somehow be smoothed out to make sense of the pre-Bang universe. But the Bang isn't all that different from future singularities, of the type we're familiar with from black holes. We don't really know what's going on at black-hole singularities, either, but that doesn't stop us from making sense of what happens from the outside. A black hole forms, settles down, Hawking-radiates, and eventually disappears entirely. Something quasi-singular goes on inside, but it's just a passing phase, with the outside world going on its merry way.

The Big Bang could have very well been like that, but backwards in time. In other words, our observable patch of expanding universe could be some local region that has a singularity (or whatever quantum effects may resolve it) in the past, but is part of a larger space in which many past-going paths don't hit that singularity.

The simplest way to make this work is if we are a baby universe. Like real-life babies, giving birth to universes is a painful and mysterious process. There was some early work on the idea by Farhi, Guth and Guven, as well as Fischler, Morgan and Polchinski, which has been followed up more recently by Aguirre and Johnson. The basic idea is that you have a background spacetime with small (or zero) vacuum energy, and a little sphere of high-density false vacuum. (The sphere could be constructed in your secret basement laboratory, or may just arise as a thermal fluctuation.) Now, if you're not careful, the walls of the sphere will simply implode, leaving you with some harmless radiation. To prevent that from happening, you have two choices. One is that the size of the sphere is greater than the Hubble radius of your universe — in our case, more than ten billion light years across, so that's not very realistic. The other is that your sphere is not simply embedded in the background, it's connected to the rest of space by a “wormhole” geometry. Again, you could imagine making it that way through your wizardry in gravitational engineering, or you could wait for a quantum fluctuation. Truth is, we're not very clear on how feasible such quantum fluctuations are, so there are no guarantees.
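For scale, the "ten billion light years" figure is essentially the present Hubble radius; taking a round H0 of about 70 km/s/Mpc (an assumed value for this estimate):

\[
  R_H = \frac{c}{H_0} \approx \frac{3\times10^{5}\ \mathrm{km/s}}{70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}}
      \approx 4.3\times10^{3}\ \mathrm{Mpc} \approx 1.4\times10^{10}\ \mathrm{light\ years},
\]

so a false-vacuum sphere bigger than the Hubble radius really would be tens of billions of light years across, which is why that option is dismissed as unrealistic.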

But if all those miracles occur, you're all set. Your false-vacuum bubble can expand from a really tiny sphere to a huge inflating universe, eventually reheating and leading to something very much like the local universe we see around us today. From the outside, the walls of the bubble appear to collapse, leaving behind a black hole that will eventually evaporate away. So the baby universe, like so many callous children, is completely cut off from communication with its parent. (Perhaps “teenage universe” would be a more apt description.)

Everyone knows that I have a hidden agenda here, namely the arrow of time. The thing we are trying to explain is not “why was the early universe like that?”, but rather “why was the history of the universe from one end of time to the other like that?” I would argue that any scenario that purports to explain the origin of the universe by simply invoking some special magic at early times, without explaining why they are so very different from late times, is completely sidestepping the real question. For example, while the cyclic-universe model is clever and interesting, it is about as hopeless as it is possible to be from the point of view of the arrow of time. In that model, if we knew the state of the universe to infinite precision and evolved it backwards in time using the laws of physics, we would discover that the current state (and the state at every other moment of time) is infinitely finely-tuned, to guarantee that the entropy will decrease monotonically forever into the past. That's just asserting something, not explaining anything.

The baby-universe idea at least has the chance to give rise to a spontaneous violation of time-reversal symmetry and explain the arrow of time. If we start with empty space and evolve it forward, baby universes can (hypothetically) be born; but the same is true if we run it backwards. The increase of entropy doesn't arise from a fine-tuning at one end of the universe's history, it's a natural consequence of the ability of the universe to always increase its entropy. We're a long way from completely understanding such a picture; ultimately we'll have to be talking about a Hilbert space of wave functions that involve an infinite number of disconnected components of spacetime, which has always been a tricky problem. But the increase of entropy is a fact of life, right here in front of our noses, that is telling us something deep about the universe on the very largest scales.

Update: On the same day I wrote this post, the cover story at New Scientist by David Shiga covers similar ground. Sadly, subscription-only, which is no way to run a magazine. The article also highlights the Banks-Fischer holographic cosmology proposal.

Thursday, September 2, 2010

The Next Generation Of Wireless Telephony


Europe has witnessed in recent years a massive growth in mobile communications, ranging from the more traditional analogue-based systems to the current generation of digital systems such as GSM (Global System for Mobile Communications), DCS-1800 (Digital Communication System at 1800 MHz), ERMES (European Radio Messaging System), and to a lesser extent DECT (Digital European Cordless Telephone) and TETRA (Trans-European Trunked Radio). The GSM family of products (GSM + DCS-1800), which represents the first large-scale deployment of a commercial digital cellular system ever, enjoys worldwide success, having already been adopted by over 190 operators in more than 80 countries. In a very short period of time, the percentage of European cellular subscribers using GSM or DCS-1800 has already exceeded 50%. In addition, the figure portrays the penetration rates of the combined analogue and digital cellular systems for the same time frame. It is worth noting that the biggest markets of Europe in terms of subscribers (i.e., UK, Italy and Germany) are not the markets with the largest penetration rates. In this respect, the largest penetration rates are found in the Nordic countries, close to or even exceeding 25% of the population.

Third Generation systems and technologies are being actively researched worldwide. In Europe, such systems are commonly referred to under the name UMTS (Universal Mobile Telecommunications Systems), while internationally, and particularly in the ITU context, they are referred to as FPLMTS (Future Public Land Mobile Telecommunications Systems) or more recently IMT-2000 (International Mobile Telecommunications for the year 2000).

In this context, but also in a worldwide perspective, with many competing mobile and personal communication technologies and standards being proposed to fulfill users' needs, the essential questions, to which no immediate, conclusive, firm answers can be given, are these: to what extent, and how fast, will users' requirements evolve beyond the need for voice and low data rate communications? And which technologies will meet the requirements for mobile and personal communications services and applications beyond the year 2000?

The rapid advance of component technology; the pressure to integrate fixed and mobile networks; the developments in the domains of service engineering, network management and intelligent networks; the desire to have multi-application hand-held terminals; and above all the increasing scope and sophistication of the multimedia services expected by the customer; all demand performance advances beyond the capability of second generation technology. The very success of second generation systems in becoming more cost effective and increasingly cost attractive raises the prospect that these systems will reach early capacity and service saturation in Europe's major conurbations. These pressures will lead to the emergence of third generation systems, representing a major opportunity for expansion of the global mobile marketplace rather than a threat to current systems and products.

The groundwork for UMTS started in 1990, and some early answers can already be provided regarding its requirements, characteristics and capabilities, with the initial standards development process already under way at ETSI (European Telecommunications Standards Institute). The basic premise upon which work is being carried out is that by the turn of the century, the requirements of the mobile users will have evolved and be commensurate with those services and applications that will be available over conventional fixed or wireline networks. The citizen in the third millennium will wish to avail himself of the full range of broadband multimedia services provided by the global information highway, whether wired or wireless connected.

Various international forums have raised the issue of technology migration from Second to Third Generation via the use of spectrum in the FPLMTS/UMTS bands. This may result in the spectrum being allocated, in some parts of the world, in an inefficient piecemeal fashion to evolved Second Generation technologies and potentially many new narrow-application systems, thereby impeding the development of broadband mobile multimedia services.

Terminal, system and network technology, as researched within the EU-funded ACTS projects, may alleviate to a large extent the complexity of the sharing of the spectrum between the Second and Third Generation systems. Finding the solution to the problem of the evolution and migration path from Second (GSM, DCS-1800, DECT) to Third Generation systems (FPLMTS/UMTS), particularly from a service provision point of view, is also the subject of intense research carried out in the context of ACTS projects. Some of the key questions that are addressed include a detailed consideration of the feasibility, as well as the cost effectiveness and attractiveness, of the candidate enhancements. In this context, the ACTS projects will develop a set of guidelines aiming at reducing the uncertainties and associated investment risks regarding the new wireless technologies, by providing the sector actors and the investment community with clear perspectives on the technological evolution and on the path to the timely availability to the user of advanced services and applications.

In response to the imperatives of the internal European market, specific measures were taken, as early as 1987, to promote the Union-wide introduction of GSM, DECT, and ERMES. European Council Directives were adopted to set out common frequency bands to be allocated in each Member State to ensure pan-European operation, together with European Council Recommendations promoting the co-ordinated introduction of services based on these systems.

In 1994, the European Commission adopted a Green Paper on Mobile and Personal Communications with the aim of establishing the framework of the future policy in the field of mobile and personal communications. The Green Paper proposed to adapt, where necessary, the telecommunications policy of the European Union to foster a European-wide framework for the provision of mobile infrastructure, and to facilitate the emergence of trans-European mobile networks, services, and markets for mobile terminals and equipment.

Based on the Green Paper, the European Commission set out general positions on the future development of the mobile and personal sector, and defined an action plan which included actions to pursue the full application of competition rules; the development of a Code of Conduct for service providers; and the agreement on procedures for licensing of satellite-based personal communications. The action plan also advocated the possibility of allowing service offerings as a combination of fixed and mobile networks in order to facilitate the full-scale development of personal communications; the lifting of constraints on alternative telecommunications infrastructures and constraints on direct interconnection with other operators; the adoption and implementation of Decisions of the ERC (European Radio-communications Committee) on frequency bands supporting DCS-1800 and TETRA; the opening up of a Europe-wide Numbering Space for pan-European services including personal communications services; and continuing support of work towards UMTS.

The combination of these regulatory changes will contribute to a substantial acceleration of the EU's mobile communications market and speed the progress towards Third Generation mobile/personal communications. It will however be necessary to encourage potential operators and manufacturers to invest in the required technology, by setting out a clear calendar for the adoption of the required new standards and the re-farming of the necessary spectrum. The applicable licensing regimes and rules for flexible sharing of the available spectrum need also to be adopted at an early stage so as to permit the identification of novel market opportunities commensurate with the broadband multimedia requirements of the Third Generation mobile telecommunications systems.

In light of the above, and in accordance with the political mandate given by the European Parliament and the European Council, the major actors in the mobile and personal communications sector have been brought together as a task force, which has led to the setting up of the UMTS Forum. The main objectives of the Forum are to contribute to the elaboration of a European policy for mobile and personal communications based on an industry-wide consensus view, and to pave the way for ensuring that mobile communications will play a pivotal role in the Global Information Society.

The continued evolution of Second Generation systems has been recognized as an issue of great societal and economic importance for Europe and the European industry. To facilitate and crystallize such an ambition, and in accordance with the political mandate given by the European Parliament and the European Council, an ad-hoc group called the UMTS Task Force was convened by the European Commission and was charged with the task of identifying Europe's mobile communications strategy towards UMTS. The report of the UMTS Task Force and its recommendations have been largely endorsed by the European mobile industry, and as a result the UMTS Forum has now been created with the mandate to provide an on-going high level strategic steer to the further development of European mobile and personal communications technologies. High on the priorities of the UMTS Forum are the issues of technology, spectrum, marketing and regulatory regimes. Drawing participation beyond the European industry, the UMTS Forum is expected to play an important role in bringing into commercial reality the UMTS vision.

Wednesday, September 1, 2010

The Next Generation Internet


By now, anyone who reads the morning paper has probably heard that the Internet will be an even bigger deal in the future than it is today. Schoolchildren will access all the great works of literature ever written with the click of a mouse; surgery will be performed via cyberspace; all transactions with the government will be conducted via your personal computer, making bureaucratic lineups a thing of the past.

Sound too good to be true? Much of what has been written about two buzzword initiatives, Internet2 (I2) and the Next Generation Internet (NGI), would lead one to believe that these scenarios are just around the corner.

And some may be. Already in the works are projects to split the spectrum of light traveling the Internet's optical networks, allowing high-priority traffic to pass at the highest and least interrupted frequency, while passing low-priority traffic (i.e. your e-mail) along at a lower frequency. Teleinstrumentation, the remote operation of such rare resources as satellites and electron microscopes, has been demonstrated. Digital libraries containing environmental data have been used to simulate natural and man-made disasters for emergency response teams. Classrooms and entire universities have gone online, making remote education an option for students.

But misconceptions about I2 and NGI abound, first and foremost that they are interchangeable terms for the same project, closely followed by the perception that the government is hard at work right now digging trenches and laying cable for what is to be a brand new Internet.

I2 and NGI are separate and distinctly different initiatives. It's easiest to think of them as two different answers to the same plaguing problem. The problem is congestion on the commercially available Internet.

The need for a new Internet

Prior to 1995, the National Science Foundation's (NSF) NSFnet served the research and academic community and allowed for cross country communications on relatively unclogged T3 (45 megabit per second) lines that were unavailable for commercial use. However, NSFnet went public in 1995, setting the stage for today's Internet. As the Internet has become irrevocably a part of life, the increase in e-mail traffic and the proliferation of graphically dense pages have eaten up valuable bandwidth.

With all of this data congealing in cyberspace - for the Internet currently knows no differentiation between a Web site belonging to Arthur Andersen and one belonging to Pamela Anderson - there has arisen a critical need for a new Internet. The answers to the questions - for what purpose, and for whose use - vary depending upon the proposed solution.

Internet2: The bottom-up initiative

Internet2 is the university community's response to the need for a return to dedicated bandwidth for academic and research use exclusively. Currently, about 120 universities and 25 corporate sponsors are members of Internet2, which in October 1997 incorporated itself, forming the University Corporation for Advanced Internet Development (UCAID).

UCAID now serves as the support and administrative organisation for the project known as Internet2. Members pay an annual fee of between $10,000 and $25,000 and must demonstrate that they are making a definitive, substantial, and continuing commitment to the development, evolution, and use of networking facilities and applications in the conduct of research and education before they are approved for membership.

Internet2 represents the interests of the academic community through its concentration on applications that require more bandwidth and end to end quality of service than the commercial Internet can provide. I2 is focused upon the needs of academia first, but is expected to develop technologies and applications that will eventually make their way into the rest of society.

The vBNS: A prototype for both Internets

The vBNS (very-high-performance Backbone Network Service), a project of the National Science Foundation and MCI Telecommunications, is a nationwide network that supports high performance, high bandwidth research applications. Like the old NSFnet, vBNS is a closed network, available only to the academic and research community. Currently it connects 46 academic institutions across the country, though a total of 92 have been approved for connectivity. A component of the vBNS project is research into high speed networking and communications and transfer of this data to the broader networking community. In many ways, the vBNS is the prototype for both I2 and NGI. The kinds of applications that both I2 and NGI would like to foster are currently deployed on this network.

Since its formation in 1996, I2 has concentrated on defining the environment where I2-type applications will run, holding member meetings and demonstrations where developers express programming needs and innovations that will be incorporated into a set of network tools that do not currently exist. One such meeting is scheduled to be held later this month at the Highway 1 technology forum in Washington, D.C.

I2 member meetings also provide a forum for researchers to investigate trends that will contribute to the applications environment, including object oriented programming, software componentisation, object request brokering, dynamic run time binding, and multitiered applications delivery with separation of data and presentation functions.

Internet2 also continues to define its relationship with the other Internet initiative, Next Generation Internet, at the same time as NGI explores how best to apply the experience and expertise of the I2 community to its task. While acknowledging their differences, each initiative positions itself relative to the other in its public statements, determining where the line between the two could or should be drawn and what benefit each brings to the other's agenda.

The NGI roadmap


The NGI initiative is divided into three progressive stages, called goals in NGI parlance. Goal 1 is underway now; Goal 3 is targeted for the end of next year.

Goal 1 calls for NGI to research, develop, and experiment with advanced network technologies that will provide dependability, diversity in classes of service, security, and realtime capability for such applications as wide area distributed computing, teleoperation, and remote control of experimental facilities. In this first phase, the project, led by the Defense Advanced Research Projects Agency (DARPA), will set the stage for the technologies, applications, and test beds envisioned for Goals 2 and 3.

Goal 2, led by the NSF, constructs the actual NGI networks and also depends heavily upon the vBNS. NGI expects that Goal 1 development will, by this point, have overcome the speed bumps of incompatible performance capabilities and service models in switches, routers, local area networks, and workstations. In Goal 2, 100 sites (universities, federal research institutions, and other research partners) will be connected at speeds in excess of 100 times that of today's Internet.

As with I2, the vBNS would serve as a backbone for the network connecting NGI participants. To bring in other research partners and provide additional connectivity, the vBNS would interconnect to other federal research networks, including DREN (Defense), NREN (NASA), ESnet (DoE), and eventually SuperNet (DARPA's terabit research network). The vBNS would also serve as a base for interconnecting to foreign high-performance networks, including the Canadian CA*net II, and others routed through the Science, Technology, and Research Transit Access Point (STAR-TAP) in Chicago.

Goal 2 of the NGI project also has the most planned collaboration with Internet2. NGI officials foresee the NSF supporting the GigaPoPs that would interconnect the I2 institutions and coordinating I2 and NGI interconnectivity to support interoperability and shared experimentation with NGI technologies and applications.

The real jump in Internet speed comes in the second, high-risk, high-security test bed planned for the second phase of Goal 2. In this phase, 10 sites will be connected on a network employing ultra high speed switching and transmission technologies and end to end network connectivity at more than 1 gigabit per second, approximately 1000 times faster than today's Internet. This 1 gigabit per second network is intended to provide the research base for an eventual terabit per second network that would employ technologies conceived and developed within NGI for harnessing such speed. A terabit per second network would additionally take advantage of optical technology pioneered by DARPA.
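Reading the article's own figures together (a rough arithmetic aside, not an official NGI specification): if 1 gigabit per second is roughly 1000 times "today's Internet", the implied end-to-end baseline is about 1 megabit per second, and the 100-fold Goal 2 target works out to roughly 100 megabits per second, a couple of times the capacity of a single 45 megabit per second T3 trunk like those the old NSFnet used.

\[
  \frac{1\ \mathrm{Gb/s}}{1000} \approx 1\ \mathrm{Mb/s},
  \qquad
  100 \times 1\ \mathrm{Mb/s} \approx 100\ \mathrm{Mb/s} \approx 2.2 \times 45\ \mathrm{Mb/s}.
\]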

The impossible becomes commonplace

The current Internet exploded once it was opened up for commercial use and privatisation. Both I2 and NGI include industry as part of their advisory and actual or envisioned working teams, a nod to the future when the technologies or applications developed within either initiative - be they terabit per second networks, quality of service tools, digital libraries, or remote collaboration environments - are ready for and applicable to the marketplace.

On today's Internet it sometimes takes many seconds to get one picture, while on tomorrow's Internet, you're going to get many pictures in one second. This means high definition video, such as that being used now for scientific visualisation. It's only a matter of time until industry seizes upon and spins this technology off into other worlds of interest to folks outside the sciences, like the entertainment industry.

Both initiatives have obstacles before them: I2 depends upon academic resources and investment, and NGI relies on Congressional budgets and endorsement.

Still, there is cautious hope within their respective communities that I2 and NGI can create not a new Internet, but a new Internet environment.

Molecular Switches

The world of molecular computing, with its ultrafast speeds, low power needs and inexpensive materials, is one step closer to reality. Using chemical processes rather than silicon-based photolithography, researchers at Rice University and Yale University in the US have created a molecular computer switch that can be turned on and off repeatedly.

Such a switch, or logic gate, is a necessary computing component, used to represent ones and zeros, the binary language of digital computing.

As far as building the basic components of molecular computing is concerned, 50 percent of the job is done; the other 50 percent is memory. Rice and Yale researchers plan to announce a molecular memory device soon.

The molecular switches would be at least several thousand times less expensive than traditional solid state devices. They also promise continued miniaturisation and increased computing power, leapfrogging the limits of silicon.

The switch works by applying a voltage to a 30-nanometer-wide self-assembled array of the molecules, allowing current to flow in only one direction within the device. The current flows only at a particular voltage; if that voltage is increased or decreased, the switch turns off again, making it reversible. In previous demonstrations of molecular logic gates there was no reversibility.

In addition, the ratio of the current that flows in the on state to that in the off state, known as the peak-to-valley ratio, is 1000 to 1. The typical silicon device response is, at best, 50 to 1. The dramatic response from off to on when the voltage is applied indicates the increased reliability of the signal.
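To make the on/off behaviour and the peak-to-valley figure concrete, here is a toy Python sketch of a negative-differential-resistance style current-voltage curve; the curve shape and every number are illustrative stand-ins, not the Rice/Yale measurements.

import numpy as np

# Toy I-V curve: current flows appreciably only near one resonance voltage V0,
# falling back to a small leakage level on either side, so sweeping the bias
# above or below the peak turns the device off again (a reversible switch).
# All values below are made up for illustration.
V = np.linspace(0.0, 5.0, 501)   # bias sweep (volts)
V0, width = 2.0, 0.15            # assumed resonance position and width
I_peak = 1.0e-9                  # assumed peak current (1 nA)
I_leak = 1.0e-12                 # assumed off-state leakage (1 pA)

I = I_leak + I_peak * np.exp(-((V - V0) / width) ** 2)

print(f"toy peak-to-valley ratio ~ {I.max() / I.min():.0f} : 1")  # about 1000:1

The larger the ratio, the more cleanly an "on" state can be distinguished from "off", which is the reliability point made above.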

The active electronic compound, 2'-amino-4-ethynylphenyl-4'-ethynylphenyl-5'-nitro-1-benzenethiol, was designed and synthesised at Rice. The molecules are one million times smaller in area than typical silicon-based transistors.
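As a rough sanity check on the "one million times smaller in area" claim, take an illustrative transistor feature of about 1 micrometre and a molecular footprint of about 1 nanometre (round numbers assumed here, not figures from the researchers):

\[
  \frac{(1\ \mu\mathrm{m})^{2}}{(1\ \mathrm{nm})^{2}}
  = \left(\frac{1000\ \mathrm{nm}}{1\ \mathrm{nm}}\right)^{2}
  = 10^{6}.
\]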

Not only is it much smaller than any switch that you could build in the solid state, it has complementary properties: if you want a large on/off ratio, it blows silicon away.

The measurements of the amount of current passing through a single molecule occurred at a temperature of approximately 60 Kelvin, or about -350 degrees Fahrenheit.
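The quoted Fahrenheit figure follows from the standard kelvin-to-Fahrenheit conversion:

\[
  T_{\mathrm{F}} = \tfrac{9}{5}\,T_{\mathrm{K}} - 459.67
                 = \tfrac{9}{5}(60) - 459.67 \approx -352\ ^{\circ}\mathrm{F},
\]

or about -350 degrees Fahrenheit, as stated.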

In addition to logic gates, potential applications include a variety of other computing components, such as high frequency oscillators, mixers and multipliers.

It really looks like it will be possible to have hybrid molecular and silicon based computers within five to 10 years.