Quantum Computing Is Going to Change the World. Here’s What This Means for You.

The science and tech world has been abuzz about quantum computers for years, but the devices are not yet affecting our daily lives. Quantum systems could seamlessly encrypt data, help us make sense of the huge amount of data we’ve already collected, and solve complex problems that even the most powerful supercomputers cannot – such as those in medical diagnostics and weather prediction.

That nebulous quantum future came one step closer this November, when the top-tier journal Nature published two papers describing some of the most advanced quantum systems yet.

If you still don’t understand what a quantum computer is, what it does, or what it could do for you, never fear. Futurism recently spoke with Mikhail Lukin, a physics professor at Harvard University and the senior author of one of those papers, about the current state of quantum computing, when we might have quantum technology on our phones or our desks, and what it will take for that to happen.

This interview has been slightly edited for clarity and brevity.

Futurism: First, can you give me a simple explanation for how quantum computing works?

Mikhail Lukin: Let’s start with how classical computers work. In classical computers, you formulate any problem you want to solve in the form of some input, which is basically a stream of 0s and 1s. When you want to do some calculation, you basically create a certain set of rules for how this stream should move. That’s the process of calculation — addition, multiplication, whatever.

But we’ve known for more than 100 years that our microscopic world is fundamentally quantum mechanical. And in quantum mechanics, a system such as your computer or your chair can be placed in two different states at once — that’s the idea of quantum superposition. In other words, your computer can be simultaneously both in Boston and in New York. This quantum superposition, even though it sounds very weird, is allowed by the laws of quantum mechanics. On a large scale, like the example I gave, it is clearly very strange. But in the microscopic world, like with a single atom, creating this kind of superposition state is actually quite common. In scientific experiments, scientists have shown that a single atom can be in two different states at once.

The idea of quantum computers is to basically make use of these rules of quantum mechanics to process information. It’s pretty easy to understand how this can be so powerful. In classical computers, you give me a certain input, I put it in my computer, I give you an output. But if our hardware were quantum mechanical, rather than sequentially providing some input and reading out the answers, I could prepare the computer register in a quantum superposition of many different kinds of inputs.

This means that if I then take this superposition state and process it using the laws of quantum mechanics, I can process many, many inputs at once. It could potentially be an exponential speedup compared to classical programs.
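To make that intuition concrete, here is a minimal state-vector sketch in Python with NumPy (an illustration of the textbook idea, not the professor's hardware): applying a Hadamard gate to each of n qubits takes a register that starts in the classical input |00…0> into an equal superposition of all 2**n classical inputs at once.

```python
import numpy as np

n = 3
state = np.zeros(2**n)
state[0] = 1.0  # register starts in the single classical input |000>

# Hadamard gate: maps |0> and |1> each to an equal superposition of both.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Apply H to every qubit by building the n-fold Kronecker product.
H_all = H
for _ in range(n - 1):
    H_all = np.kron(H_all, H)

state = H_all @ state
print(state)  # all 2**n amplitudes equal 1/sqrt(2**n): every input at once
```

A quantum algorithm then acts on all of these amplitudes in a single pass, which is where the potential exponential speedup comes from; extracting a useful answer at the end still requires a cleverly designed algorithm, not the superposition alone.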

F: What does a quantum computer look like?

ML: If you were to walk into a room with our quantum machine in it, you would see a vacuum cell or tube and a bunch of lasers shining into it. Inside, we have a very low density of a certain atom. We use the lasers to slow the atomic motion and cool the atoms to very near absolute zero, a technique called laser cooling.

Image credit: Lukin lab – Harvard

F: So how do you program the thing?

ML: To program a quantum computer, we shine a hundred tightly focused laser beams into this vacuum chamber. Each of these laser beams acts as an optical tweezer, which either grabs a single atom or stays empty. So we have these atom traps, each of which is either loaded or empty. We then take a picture of the atoms in the traps and figure out which ones are full and which are empty. Then we rearrange the traps containing single atoms into any pattern we wish. In this desired arrangement, each atom is individually held and easily controlled, so the atoms can be positioned basically at will.
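As a rough illustration of that load-image-rearrange loop (a toy model with made-up numbers, not the lab's actual control code), the bookkeeping looks something like this:

```python
import random

NUM_TRAPS = 100  # hypothetical: one trap per focused laser beam

# Each tweezer trap randomly captures a single atom or stays empty.
random.seed(0)
traps = [random.random() < 0.5 for _ in range(NUM_TRAPS)]  # True = loaded

# "Take a picture": record which traps actually hold an atom.
loaded = [i for i, full in enumerate(traps) if full]

# Rearrange: move the atoms we did catch into a chosen defect-free pattern,
# here simply the first len(loaded) trap positions.
target = list(range(len(loaded)))
moves = [(src, dst) for src, dst in zip(loaded, target) if src != dst]

print(f"{len(loaded)} atoms loaded; {len(moves)} moves to reach the pattern")
```

The real experiment does this with movable optical tweezers in a fraction of a second; the sketch only captures the logic of imaging the random loading and then deterministically rearranging it.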

Positioning these atoms is one way that we can program the machine. To actually control the qubits, we gently, carefully push the atoms from their lowest energy state into a higher energy state. We do this with carefully chosen laser beams tuned to one specific transition; their frequency is very tightly controlled. In this excited state the atom actually becomes very big, and because of this size, the atoms start interacting or, in other words, talking to each other. By choosing the state to which we excite the atoms and choosing their arrangements and positions, we can program the interaction in a highly controllable way.

F: What kinds of applications would a quantum computer be most useful for?

ML: To be honest, we really don’t know the answer. It’s generally believed that quantum computers will not necessarily help with all computational tasks. But there are problems that are mathematically hard for even the best classical computers. These usually involve complex optimizations in which you try to satisfy a number of contradictory constraints.

Suppose you want to give some kind of collective present to a group of people, each of whom has their own needs. Some of the needs might be contradictory. So if you solve this problem classically, you have to check each pair or triplet of people to make sure that at least their needs are satisfied. The complexity of this problem grows very, very rapidly with size, because the number of classical combinations you need to check is exponential. There is some belief that for some of these problems, quantum computers can offer an advantage.
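A brute-force sketch of why this blows up classically (with toy constraints invented for illustration): every yes/no choice doubles the number of combinations to check, so n choices mean 2**n candidate solutions.

```python
from itertools import product

n = 4  # number of binary choices, say one per person's wish

# Hypothetical conflicts: (i, j) means wishes i and j can't both be granted.
conflicts = [(0, 1), (1, 2), (2, 3)]

def satisfies(assignment):
    return all(not (assignment[i] and assignment[j]) for i, j in conflicts)

# Classical approach: enumerate all 2**n combinations and test each one.
solutions = [a for a in product([0, 1], repeat=n) if satisfies(a)]
print(f"{len(solutions)} of {2**n} combinations satisfy every constraint")
```

At n = 4 this is trivial; at n = 300 the number of combinations exceeds the number of atoms in the observable universe, which is why even a modest quantum advantage on such problems would matter.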

Another very well-known example is factoring. If you have a small number, like 15, it’s clear that the factors are 3 and 5, but this kind of problem very quickly becomes complicated as the number grows. If you have a large number that is a product of two large factors, classically there is pretty much no better way to find those factors than just trying numbers: one, two, three, and so on. But it turns out that a quantum algorithm exists, called Shor’s algorithm, that can find the factors exponentially faster than the best-known classical algorithms. If you can do something exponentially faster than the alternative approach, that’s a big gain.
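For a feel of the classical cost, here is a minimal trial-division sketch (an illustration of the naive approach described above, not Shor's algorithm):

```python
def trial_division(n):
    """Try candidate factors 2, 3, 4, ... up to sqrt(n).

    The work grows with sqrt(n), i.e. exponentially in the number of
    digits of n, which is the scaling Shor's algorithm circumvents.
    """
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # no factor found: n is prime

print(trial_division(15))                  # (3, 5)
print(trial_division(999_983 * 1_000_003)) # ~a million trial divisions already
```

Shor's algorithm, by contrast, runs in time polynomial in the number of digits, but it requires a quantum computer far larger and more reliable than today's devices.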

F: It sounds like your mission, and that of others in your field, is to help us advance and understand this technology, but the applications are sort of secondary and will come when you have the tools. Does that seem about right?

ML: I will answer your question with an analogy. When classical computers were first developed, they were mostly used for scientific calculations, numerical experiments to understand how complex physical systems behave. Quantum machines are at this stage of development right now. They already allow us to study complex quantum physical phenomena. They are useful for scientific purposes, and scientists are already using them that way.

In fact, one significance of our papers [published in Nature] is that we have already built machines that are large enough, complex enough, and quantum enough to do scientific experiments that are very difficult, or even impossible, to do on even the best classical computers — essentially supercomputers. In our work, we already used our machine to make a scientific discovery that had not been made until now, in part because it’s very difficult for classical computers to model these systems. In some ways, we are now crossing the threshold where quantum machines are becoming useful, at least for scientific purposes.

When classical computers were being developed, people had some ideas of which algorithms to run on them. But actually it turned out that when the first computers were built, people were able to start experimenting with them and discovered many more practically efficient, useful algorithms. In other words, that’s really when they discovered what these computers can actually be good for.

That’s why I’m saying that we really don’t know now the tasks for which quantum computers will be particularly useful. The only way to find these tasks is to build large, functional quantum machines and try these things out. That’s an important goal, and I should say that we are entering this phase now. We’re very, very close to a stage when we can start experimenting with quantum algorithms on large-scale machines.

F: Tell me a little bit about your Nature paper. What actually is the advance here? And how close are we to being able to start discovering the algorithms that could work on quantum computers?

ML: So first let’s talk about how one could quantify quantum machines. It can be done along three different axes. One axis is the scale — how many qubits [a “quantum bit,” the unit that forms the basis of a quantum computer the way “bits” do in classical computing] the machine has. More is better. Another axis is the degree of quantum-ness, that is, how coherent these systems are. One way to quantify it: if you have a certain number of qubits and you perform some calculation with them, what’s the probability that the calculation is error-free?

If you have a single qubit, you have a small chance of making an error. Once you have a lot of them, this probability is exponentially higher. The systems described in our paper, and also in the complementary paper, have enough qubits and are coherent enough that we can do an entire series of computations with fairly low error probability. In other words, in a finite number of tries, we can get a result that has no errors.
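A back-of-the-envelope sketch of that scaling (with a made-up per-operation error rate): if each operation independently succeeds with probability 1 - p, a computation involving n operations finishes error-free with probability (1 - p)**n, which shrinks exponentially in n.

```python
p = 0.01  # hypothetical 1% error rate per operation

for n in (1, 10, 100, 500):
    print(f"n = {n:3d}: error-free probability = {(1 - p) ** n:.4f}")

# n =   1: 0.9900
# n =  10: 0.9044
# n = 100: 0.3660
# n = 500: 0.0066
```

Even at a 36.6% success rate per run (n = 100), a handful of repetitions is very likely to yield at least one error-free result, which is the sense in which "a finite number of tries" suffices.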

But this is still not the complete story. The third axis is how well you can program the machine. Basically, if you can make each qubit talk with any other qubit in an arbitrary fashion, you can encode any quantum problem into the machine. Such machines are sometimes called universal quantum computers. Our machine is not fully universal, but we demonstrate a very high degree of programmability. We can actually change the connectivity very quickly. This, in the end, is what allows us to probe and make new discoveries about these complex quantum phenomena.

F: Could a quantum computer be scaled down to the size of a phone, or something vaguely portable at some point?

ML: That is not out of the question. There are ways to package it so that it can actually become portable, and it could potentially be miniaturized, maybe not to the point of a mobile phone, but perhaps to a desktop computer. But that cannot be done right now.

F: Do you think, like classical computers, quantum computers will make the shift from just scientific discoveries to the average user in about 30 years?

ML: The answer is yes, but why 30 years? It could happen much sooner.

F: What has to happen between now and then? What kind of advances need to be made to get us there?

ML: I think we need to have big enough computers to start really figuring out what they can be used for. We don’t know yet what quantum computers are capable of doing, so we don’t know their full potential. I think the next challenge is to do that.

The next stage will be engineering and creating machines that could target some specialized applications. People, including [my team], are already working on developing small-scale quantum devices designed to, for example, aid in medical diagnostics. In some of these applications, quantum systems just measure tiny electric or magnetic fields, which could allow you to do diagnostics more efficiently. I think these things are already coming, and some of these ideas are already being commercialized.

Then maybe some more general applications could be commercialized. In practice, quantum computers and classical computers will likely work hand-in-hand. Most likely, the majority of the work will be done by classical computers, but some elements, the most difficult problems, will be handed to quantum machines.

There is also another field called quantum communication, where you can basically transfer quantum states between distant stations. If you use quantum states to send information, you can build communication lines that are completely secure. Moreover, through these so-called quantum networks, sometimes called the quantum internet, we should be able to access quantum servers remotely. So I can certainly imagine many directions in which quantum computers enter everyday life, even if you don’t carry one in your pocket.

F: What’s something that you wish more people knew about quantum computers?

ML: Quantum computing and quantum technology have been in the news for some time. We scientists know that it’s an exciting area; it’s really the frontier of scientific research across many subfields. But over the last five to 10 years, most people assumed that these developments were very futuristic, that it would take a long time before we created any useful quantum machines.

I think that this is just not the case. I think we are already entering a new era with tremendous potential for scientific discoveries, which might have wide-ranging applications for materials science, chemistry — really anything that involves complex physical systems. But I also feel that very soon we will start discovering what quantum computers can be useful for in a much broader scope, ranging from optimization to artificial intelligence and machine learning. I think these things are around the corner.

We don’t yet know exactly what quantum computers will do, or how, but we will find out very soon.


Apple’s support for wireless charging and AR is the latest sign of ‘ubiquitous computing’


Technology is shrinking. As our gadgets evolve, they become smaller and smaller, so that they’re able to permeate every part of our lives and even our bodies. Headphones have lost their wires and been reduced to the size of buttons, and yet they can produce the sound of a complete orchestra. Now that the power of computers is tightly packed into tiny gadgets and wearables, the only logical next step is for them to disappear. Where will they go? The answer is: everywhere. Twenty years ago, futurist and physicist Michio Kaku wrote the following in his book Visions: “A consensus…


Europe unveils roadmap for the next decade of quantum computing

Back in 2016, the EU invested 1 billion euros (almost $1.2 billion at today's exchange rates) in quantum computing. Now, a year and a half later, it's time for an update on what's happening thanks to a 150-page roadmap on European quantum technologie…

Five key IIoT predictions for 2018: Collaboration, customer success, edge computing, and more

The global industrial IoT market is set to reach $933 billion by 2025, according to Grand View Research. Here, Sastry Malladi, CTO of FogHorn Systems, outlines what he thinks will happen in the space in 2018.

Momentum for edge analytics and edge intelligence in the IIoT will accelerate in 2018

Almost every notable hardware vendor has a ruggedized line of products promoting edge processing. This indicates that the market is primed for Industrial IoT (IIoT) adoption. With technology giants announcing software stacks for the edge, there is little doubt that this momentum will only accelerate during 2018. Furthermore, traditional industries, like manufacturing, that have been struggling to showcase differentiated products will now embrace edge analytics to drive new revenue streams and/or significant yield improvements for their customers.

Additionally, any industry with assets being digitized and making the leap toward connecting or instrumenting brownfield environments is well positioned to leverage the value of edge intelligence. Usually, the goal of these initiatives is to have deep business impact. This can be delivered by tapping into previously unknown or unrealized efficiencies and optimizations. Often these surprising insights are uncovered only through analytics and machine learning. Industries with often limited access to bandwidth, such as oil and gas, mining, fleet and other verticals, truly benefit from edge intelligence. What’s more, those that apply edge intelligence are able to benefit from real-time decisions, as well as insights from voluminous streaming sensor data.

Due to the current pain points in the IIoT space, and the availability of edge technology to address them, we expect to see increased interest in edge analytics/ML from oil and gas, energy, utilities, transportation and other sectors looking to improve their IIoT value.

Business cases and ROI are critical for IIoT pilots and adoption in 2018

The year 2017 was about exploring IIoT and led to the explosion of proof of concepts and pilot implementations. While this trend will continue into 2018, we expect increased awareness about the business value edge technologies bring to the table. Companies that have been burned by the “Big Data Hype” – where data was collected but little was leveraged – will assess IIoT engagements and deployments for definitive ROI. As edge technologies pick up speed in proving business value, the adoption rate will exponentially rise to meet the demands of ever-increasing IoT applications.

IIoT standards will be driven by customer successes and company partnerships

IIoT is just now getting attention from the major technology players. If anything, 2018 will see more new products coming to market, and there will be more to choose from in terms of standards. The next year or two will see stronger alliances, unlikely partnerships and increased merger and acquisition activity as the large technology companies seek innovation inside and outside their organizations. As for standards, they will be driven by success of customers and patterns of scalable IIoT solutions.

IT and OT teams will collaborate for successful IIoT deployments

IIoT deployments will start forcing closer engagement between IT and operations technology (OT) teams. Line-of-business leaders will get more serious about investing in digitization, and IT will become the cornerstone required for the success of these initiatives. What was considered a wide gap between the two sectors, IT and OT, will be bridged thanks to the collaboration needed to successfully deploy IIoT solutions and initiatives.

And will OT design affect IIoT apps? Yes, definitely. Recent research and field studies suggest that analytics tools are being made more accessible to end users, i.e., domain experts and plant operators. This means that advanced technology is now available to field workers, so operational decisions can be made in real time at the industrial location.

Edge computing will reduce security vulnerabilities for IIoT assets

While industries do recognize the impact of an IIoT security breach, there is surprisingly little implementation of specific solutions. This stems from two emerging trends:

  • Traditional IT security vendors are still repositioning their existing products to address IIoT security concerns
  • A number of new entrants are developing targeted security solutions that are specific to a layer in the stack, or a particular vertical

This creates the expectation that, if and when an event occurs, these two classes of security solutions will be sufficient. IoT deployments are often considered greenfield and emerging, so these security breaches still seem very futuristic, even though they are happening now. Consequently, there is little urgency to deploy security solutions, and most leaders seem to take a wait-and-watch approach. The good news is that major security threats, like WannaCry, Petya/Goldeneye and BadRabbit, do bring IIoT security concerns back into the regular news cycle. However, until security solutions are more targeted and evoke trust, they may not help move the needle.


AT&T plans edge computing test zone for Silicon Valley

US telco AT&T plans facility where partners can run edge computing experiments in areas such as self-driving cars and augmented reality.

With edge computing now firmly finding its place in the IoT, US telco AT&T has laid down plans to open an edge computing test zone in the Bay Area of northern California in early 2018.

The zone itself is intended to be a cross between a proof of concept (PoC) lab and a developer hack shop. Initial reports suggest that AT&T will invite partners to test connected applications there, such as self-driving car software, drones and augmented and virtual reality (AR/VR) innovations.

At launch, the zone will use a 4G LTE connection, but the engineering team behind it hopes to upgrade to 5G once the final standards and equipment are ready.


The next step

“Edge computing is the next step in the evolution of the network,” claimed Melissa Arnoldi, president of AT&T Technology and Operations. “As [fast] connectivity becomes ubiquitous, it also needs to become smart. Edge computing puts a supercomputer in your pocket, on your wrist, in your car, and right in front of your eyes.”

The company has suggested that edge computing’s core challenge is striking the right balance between functionality and power. For example, today, an AR app running on a smartphone can offer high-end images or longer battery life, but not both. Cranking up the visual detail burns through the battery; reducing power consumption generally means graphics that aren’t as sharp.

The answer, then, according to the company at least, is to move processing to the cloud as the next logical step.


Where cloud comes in

The cloud computing model of service-based application delivery and data storage, processing and analytics is widely agreed to be a logical step not just for edge computing, but for the majority of IT deployments. Where AT&T may be offering additional insight is in the expertise it can draw from its heritage in network transmission technologies.

The company says that in today’s networks, the physical distances between users and data centers create latency. As requests and responses travel hundreds or thousands of miles, users often notice the delay.

“With edge computing, we’ll install graphics processors and other computers in cell towers, small cells and other parts of our network that are never more than a few miles from our customers. This is what’s known as the edge of the network. In addition, low latency is being built into 5G from the get-go. The result: you will be able to run high-end applications in the cloud, and it will feel like it’s all happening right on your device,” said the company.


An Agile approach

Developers and other third parties will be invited to test and innovate at AT&T’s Palo Alto-based edge computing zone and, as with all R&D work, success is never guaranteed. But the company says its rapid innovation model (as in, Agile with a capital A) means it can move on quickly when an approach isn’t panning out and apply lessons learned to future projects.

“Our goal in this experiment is to find the right architecture, the right services and the right business value in this ecosystem,” said Igal Elbaz, head of AT&T Foundry. “It’s all about moving quickly and collaborating closely with third-party innovators and developers.”

