
Five latest scientific and technological developments that matter

LiveMint 17-07-2017 Leslie D'Monte

Early adopters should get exclusive access for wider adoption: MIT Bitcoin study

If early adopters are not allowed to access new technologies first, those technologies may not be widely adopted by the masses, suggests new research from the Massachusetts Institute of Technology (MIT). The researchers tested this hypothesis by examining adoption rates of the cryptocurrency Bitcoin among MIT students.

In 2014, the MIT Bitcoin Project offered all incoming freshmen access to $100 worth of bitcoins. Of the 4,494 MIT freshmen, about 3,100 joined the researchers’ experiment. Those students had five days to sign up on a waiting list, complete a survey, and create a digital wallet. The researchers first identified which students exhibited natural early adopter (NEA) traits compared to the other students, whom they refer to as natural late adopters (NLAs).

They classified as NEAs the first 25% of students who signed up to the waiting list, all within the first 24 hours. Surveys showed that those NEAs were also more likely to be top computer programmers, to have built mobile apps, and to use peer-to-peer payment apps, among other identifiers. These characteristics align with popular definitions of early adopters, who generally possess advanced technical skills that help them start using new technologies.

Bitcoins were distributed a few weeks after the sign-ups. But the researchers, MIT Sloan School of Management professors Christian Catalini and Catherine Tucker, randomly delayed distribution of the bitcoins to 50% of the students, both NEAs and NLAs, by another two weeks. They then tracked all Bitcoin transactions through blockchain—the underlying digital ledger used by Bitcoin—and through the students’ digital wallets.

Students who were identified as early adopters of Bitcoin, but whose payment was delayed, cashed out their balance and abandoned the technology at nearly twice the rate of early adopters who received their payment earlier. The early adopters who cashed out also influenced those around them to do the same in high numbers.

The study, published in the journal Science on 13 July, is the first to examine what happens “when natural early adopters are purposely denied first, exclusive access to new technologies,” Catalini said in a press statement.

The researchers discovered that the two-week cash-out rate of the NEAs who received their bitcoins late rose to 18%, well over the non-delayed NEA cash-out rate of 11%. Both groups of late adopters, on the other hand, showed cash-out rates of roughly 10%, suggesting they were indifferent to the delay.

The research suggests that NEAs find some value, monetary or social, in having exclusive access to new technologies. There is also a "spillover" effect, in which NLAs were more likely to drop Bitcoin if NEAs did, possibly because late adopters rely on early adopters to learn about new technologies, Catalini said.

The researchers therefore suggest that identifying NEAs before going to market may be valuable, rather than relying on whoever lines up outside the store. As an aside, the Bitcoin experiment also proved to be a boon to the majority of MIT undergraduates. More than 50% held on to their bitcoins, possibly hoping for the price to rise further, Tucker says. The $100 in Bitcoin they were given in 2014 is now worth more than $700.

You can make your own supernova

A supernova is the explosion of a star, the largest explosion that takes place in space. Scientists study supernovae because, although a supernova burns for only a short period of time, it tells them a lot about the universe. Supernovae are difficult to see in our own Milky Way galaxy because dust blocks our view. But you do not have to be a scientist, or even have a telescope, to hunt for supernovae.

In fact, our own solar system is thought to have formed when a nearby supernova exploded, distributing the elements it had forged into a cloud of hydrogen that then condensed to form our sun and the planets.

But can we create our own supernova?

Working in collaboration with Imperial College London and AWE Aldermaston, a team of researchers led by Professor Gianluca Gregori of the Department of Physics at Oxford is currently demonstrating precisely this at the Royal Society Summer Science Exhibition, a week-long showcase of cutting-edge science from across the UK.

The team was able to mimic some properties of these supernovae in the laboratory using the most powerful lasers on Earth, such as the ORION laser at AWE. Each output pulse from the laser lasts for only a few billionths of a second but, in that time, the power it generates is equivalent to the output of the electricity grid of the whole planet.

The extremes of density and temperature produced by the lasers allow scientists to study how the supernova acts when it expands into space, and can also provide insight into how high energy particles from space are produced, how the magnetic field in the galaxy formed, and what the interior of a giant planet might look like.

Computer that reads body language

Humans use voice, facial expressions and even body movements to communicate. Computers cannot do this unless they are trained to, which takes a lot of effort.

Researchers at Carnegie Mellon University's Robotics Institute, for instance, have now enabled a computer to understand the body poses and movements of multiple people from a video in real time, using a new method developed with the help of the Panoptic Studio, a two-storey dome embedded with 500 video cameras.

In a 6 July press note, the researchers pointed out that tracking multiple people in real time, particularly in social situations where they may be in contact with each other, presents a number of challenges. For instance, simply using programs that track the pose of an individual does not work well when applied to each individual in a group, particularly when that group gets large. Hence, the researchers first localized all the body parts in a scene—arms, legs, faces, etc.—and then associated those parts with particular individuals.
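As a rough illustration of that bottom-up idea (a minimal sketch under assumed inputs, not the researchers' released code), part candidates can first be detected anywhere in the frame and then grouped into people using pairwise scores:

```python
import numpy as np

# Hypothetical detections: (x, y) image coordinates for two part types.
necks  = np.array([[100, 80], [300, 90]], dtype=float)
wrists = np.array([[310, 200], [95, 210], [500, 40]], dtype=float)

def affinity(neck, wrist):
    """Stand-in for a learned part-affinity score; here, just inverse distance."""
    return 1.0 / (1.0 + np.linalg.norm(neck - wrist))

# Score every neck-wrist pair, then greedily assign each wrist to at most one
# neck, highest score first; this is the association step that lets the method
# scale to groups where single-person trackers would get confused.
pairs = sorted(
    ((affinity(n, w), i, j) for i, n in enumerate(necks) for j, w in enumerate(wrists)),
    reverse=True,
)
used_necks, used_wrists, people = set(), set(), {}
for score, i, j in pairs:
    if i in used_necks or j in used_wrists:
        continue
    people[i] = j            # person i is assigned wrist j
    used_necks.add(i)
    used_wrists.add(j)

print(people)  # each detected neck ends up paired with its nearest wrist
```

The real system learns its part detectors and association scores from data; the inverse-distance score above is only a placeholder to show how detection and grouping are separated.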

The challenges for hand detection were greater. Because people use their hands to hold objects and make gestures, a camera is unlikely to see all parts of the hand at the same time. And unlike for the face and body, there are no large datasets of hand images that have been laboriously annotated with labels of parts and positions, the researchers noted. To overcome this, they made use of CMU's multi-camera Panoptic Studio, where a single shot gives 500 views of a person's hand and allows the hand position to be annotated automatically.
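The automatic annotation rests on a standard multi-view idea: if the same keypoint is seen from several calibrated cameras, its 3D position can be triangulated and then reprojected into every view as a label. The sketch below illustrates that step with made-up camera parameters; it is not the Panoptic Studio pipeline itself.

```python
import numpy as np

def triangulate(proj_mats, points_2d):
    """Linear (DLT) triangulation of one 3D point from several views."""
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]
    return X[:3] / X[3]

# Two toy cameras with the same intrinsics, one shifted sideways.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [0.0]])])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# A hypothetical fingertip 5 units in front of the cameras.
X_true = np.array([0.2, -0.1, 5.0, 1.0])
obs = []
for P in (P1, P2):
    x = P @ X_true
    obs.append(x[:2] / x[2])            # its 2D projection in each view

X_est = triangulate([P1, P2], obs)
print(np.allclose(X_est, X_true[:3]))   # True: the 3D keypoint is recovered
```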

The Panoptic Studio is now being used to improve body, face and hand detectors by training them jointly. And as work progresses from 2-D to 3-D models of humans, the facility's ability to automatically generate annotated images will be crucial, the researchers point out.

Insights gained from experiments in that facility now make it possible to detect the pose of a group of people using a single camera and a laptop computer.

These methods for tracking 2-D human form and motion open up new ways for people and machines to interact with each other, and for people to use machines to better understand the world around them. The ability to recognize hand poses, for instance, will make it possible for people to interact with computers in new and more natural ways, such as communicating with computers simply by pointing at things.

Detecting the nuances of non-verbal communication between individuals will allow robots to serve in social spaces, perceiving what people around them are doing, what moods they are in and whether they can be interrupted. A self-driving car could get an early warning that a pedestrian is about to step into the street by monitoring body language. Enabling machines to understand human behaviour could also open up new approaches to behavioural diagnosis and rehabilitation for conditions such as autism, dyslexia and depression.

In sports analytics, real-time pose detection will make it possible for computers not only to track the position of each player on the field of play, as is now the case, but to also know what players are doing with their arms, legs and heads at each point in time. The methods can be used for live events or applied to existing videos.

To encourage more research and applications, the researchers have released their computer code for both multi-person and hand-pose estimation. The researchers will present reports on their multi-person and hand-pose detection methods at the Computer Vision and Pattern Recognition Conference (21-26 July) in Honolulu.

Quantum computer may be better at keeping secrets, even online

Researchers in Singapore and Australia have proposed a way to use a quantum computer securely, even over the internet. The technique could hide both your data and your program from the computer itself, they wrote in the journal Physical Review X on 11 July.

“We’re looking at what’s possible if you’re someone just interacting with a quantum computer across the internet from your laptop. We find that it’s possible to hide some interesting computations,” says Joseph Fitzsimons, a Principal Investigator at the Centre for Quantum Technologies (CQT) at the National University of Singapore and Associate Professor at Singapore University of Technology and Design (SUTD), who led the work.

Conventional computers use bits, the basic unit of information in computing—zeros and ones. A quantum computer, on the other hand, deals with qubits (quantum bits) that can encode a one and a zero simultaneously—a property that will eventually allow them to process a lot more information than traditional computers, and at unimaginable speeds.

In the scheme suggested by the researchers, the quantum computer is prepared by putting all its qubits into a special type of entangled state (Quantum entanglement is a phenomenon in which the quantum states of two or more objects have to be described with reference to each other, even though the individual objects may be spatially separated). Then the computation is carried out by measuring the qubits one by one. The user provides step-wise instructions for each measurement: the steps encode both the input data and the program. The quantum computer can’t tell which qubits were used for inputs, which for operations and which for outputs.
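A minimal numerical sketch of the measurement-based idea this builds on (illustrative only, not the authors' full blind-computation protocol): two qubits are entangled with a controlled-Z gate, and the user's "instruction" is simply a measurement angle. Measuring the first qubit at that angle leaves the second qubit carrying the input state with a rotation Rz(theta) followed by a Hadamard applied to it, so the instruction drives the computation without the machine being told what program that step belongs to.

```python
import numpy as np

zero = np.array([1, 0], dtype=complex)
one  = np.array([0, 1], dtype=complex)
plus = (zero + one) / np.sqrt(2)
H    = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CZ   = np.diag([1, 1, 1, -1]).astype(complex)

def Rz(theta):
    return np.array([[1, 0], [0, np.exp(1j * theta)]], dtype=complex)

def measure_step(psi_in, theta):
    """Entangle the input qubit with a fresh |+> qubit via CZ, then project
    the first qubit onto (|0> + e^{-i*theta}|1>)/sqrt(2) (outcome s = 0) and
    return the state left on the second qubit. The s = 1 outcome, which
    would need a known Pauli correction, is omitted for brevity."""
    state = CZ @ np.kron(psi_in, plus)
    m0 = (zero + np.exp(-1j * theta) * one) / np.sqrt(2)
    out = np.kron(m0.conj(), np.eye(2)) @ state   # contract away qubit 1
    return out / np.linalg.norm(out)

theta = 0.7                                    # the user's instruction
psi = np.array([0.6, 0.8], dtype=complex)      # the user's (hidden) input data
print(np.allclose(measure_step(psi, theta), H @ Rz(theta) @ psi))  # True
```

In the full scheme many such measurements are chained, and the same sequence of angles and outcomes is consistent with many different programs, which is what keeps the computation hidden from the machine's owner.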

The researchers acknowledge that although the owner of the quantum computer could try to reverse engineer the sequence of measurements performed, ambiguity about the role of each step leads to many possible interpretations of what calculation was done. The set of interpretations grows rapidly with the number of qubits.

Developing a quantum computer, however, is easier said than done. The main hurdle is stability: calculations take place at the quantum level, where the slightest interference can disrupt the process. Tackling this instability is one of the main reasons why building a quantum computer is so expensive.

The concept of quantum computing, conceived by American theoretical physicist Richard Feynman in 1982, is not new to governments and companies like International Business Machines Corp. (IBM), Microsoft Corp. and Google Inc. IBM, for instance, announced on 17 May that it is making a quantum computer with 16 quantum bits accessible to the public for free on the cloud, as well as a 17-qubit prototype commercial processor.

Seventeen qubits are not enough to outperform the world’s current supercomputers, but as quantum computers gain qubits, they are expected to exceed the capabilities of any machine we have today. That should drive demand for access.

Getting rid of fakes with quantum technology

Lancaster University researchers believe that their new quantum technology will make counterfeiting impossible, whether the fakes are aerospace parts or luxury goods. The scientists have created unique atomic-scale IDs based on the irregularities found in 2D materials such as graphene. On the atomic scale, quantum physics amplifies these irregularities, making it possible to 'fingerprint' them in simple electronic devices and optical tags, the researchers said in a 6 July press note.

For the first time, the team will be showcasing this technology via a smartphone app that can tell whether a product is real or fake, enabling people to check a product's authenticity themselves. The customer scans the optical tag on a product with a smartphone, and the app matches the 2D tag against the manufacturer's database. This has the potential to end product counterfeiting and forgery of digital identities, two of the costliest crimes in the world today.
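In outline, that verification flow might look like the sketch below. Everything here, from the function names to the quantisation step and the database layout, is a hypothetical illustration of matching a measured fingerprint against registered tags, not Lancaster's actual protocol.

```python
import hashlib

def fingerprint_digest(measurements, precision=2):
    """Quantise the raw tag readout so repeated scans of the same tag map to
    the same value, then hash it into a compact digest."""
    quantised = tuple(round(m, precision) for m in measurements)
    return hashlib.sha256(repr(quantised).encode()).hexdigest()

# Manufacturer side: digests of genuine tags, registered at production time.
genuine_tags = {fingerprint_digest([0.131, 2.472, 1.905, 0.088])}

# Customer side: the app scans a product and checks it against the database.
scan = [0.132, 2.468, 1.901, 0.090]        # a fresh, slightly noisy reading
print(fingerprint_digest(scan) in genuine_tags)   # True for a genuine tag
```

Real physically unclonable tags need more careful noise handling than simple rounding, but the sketch shows why the fingerprint never has to leave the phone in raw form: only a digest is compared with the manufacturer's records.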

Imports of counterfeited and pirated goods around the world cost nearly $0.5 trillion in lost revenue. Counterfeit medicines alone cost the industry over $200 billion every year, the researchers point out. They expect this patented technology and the related application to be available to the public in the first half of 2018.
