Advances in Computers thread

Text messages direct to your contact lens
New technology that will allow information, such as text messages from a mobile phone, to be projected onto a contact lens worn in the human eye has been developed by Belgian researchers.
12:52PM GMT 07 Dec 2012
Ghent University's centre of microsystems technology has developed a spherical curved LCD display which can be embedded in contact lenses and handle projected images using wireless technology.

"Now that we have established the basic technology, we can start working towards real applications, possibly available in only a few years," said Professor Herbert De Smet.

Unlike previous contact lens displays, which are limited to a few small pixels to make up an image, the new technology allows the whole curved surface of the lens to be used.

One application suggested by the researchers is a "one pixel, fully covered contact lens acting as adaptable sunglasses".

"This is not science fiction," said Jelle De Smet, the chief researcher on the project, who believes commercial applications for the lenses will be available within five years.
Text messages direct to your contact lens - Telegraph
 
Sprint and Oakley (owned by that Virgin Mobile British guy) are coming out with something like that next year. It will be more like a HUD (heads-up display), with a voice-commanded, holographic interface when wearing the glasses, and they will look cool too. So when Sprint and Oakley stock comes down, start buying.
 
Engineers make tiny, low-cost, terahertz imager chip

The new terahertz chips developed by Caltech electrical engineers, shown with a penny for scale.

A secret agent is racing against time. He knows a bomb is nearby. He rounds a corner, spots a pile of suspicious boxes in the alleyway, and pulls out his cell phone. As he scans it over the packages, their contents appear onscreen. In the nick of time, his handy smartphone application reveals an explosive device, and the agent saves the day.

Sound far-fetched? In fact it is a real possibility, thanks to tiny inexpensive silicon microchips developed by a pair of electrical engineers at the California Institute of Technology (Caltech). The chips generate and radiate high-frequency electromagnetic waves, called terahertz (THz) waves, that fall into a largely untapped region of the electromagnetic spectrum—between microwaves and far-infrared radiation—and that can penetrate a host of materials without the ionizing damage of X-rays.

When incorporated into handheld devices, the new microchips could enable a broad range of applications in fields ranging from homeland security to wireless communications to health care, and even touchless gaming. In the future, the technology may lead to noninvasive cancer diagnosis, among other applications.

"Using the same low-cost, integrated-circuit technology that's used to make the microchips found in our cell phones and notepads today, we have made a silicon chip that can operate at nearly 300 times their speed," says Ali Hajimiri, the Thomas G. Myers Professor of Electrical Engineering at Caltech. "These chips will enable a new generation of extremely versatile sensors."

Hajimiri and postdoctoral scholar Kaushik Sengupta (PhD '12) describe the work in the December issue of the IEEE Journal of Solid-State Circuits. Researchers have long touted the potential of the terahertz frequency range, from 0.3 to 3 THz, for scanning and imaging.
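
As a quick sanity check on the "nearly 300 times" figure (my own back-of-envelope numbers, not from the article): phone chips of the day clocked at roughly 1 GHz, and 300 times that lands right at the bottom edge of the terahertz band mentioned above.

```python
# Back-of-envelope check (illustrative numbers, not from the article):
phone_clock_hz = 1e9                 # ~1 GHz, a typical 2012 smartphone CPU clock
thz_chip_hz = 300 * phone_clock_hz   # "nearly 300 times their speed"
print(thz_chip_hz / 1e12)            # 0.3 -> 0.3 THz, bottom of the 0.3-3 THz band
```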

Such electromagnetic waves can easily penetrate packaging materials and render image details in high resolution, and can also detect the chemical fingerprints of pharmaceutical drugs, biological weapons, illegal drugs, or explosives. However, most existing terahertz systems involve bulky and expensive laser setups that sometimes require exceptionally low temperatures. The potential of terahertz imaging and scanning has gone untapped because of the lack of compact, low-cost technology that can operate in this frequency range.

Read more at: Engineers make tiny, low-cost, terahertz imager chip

This is pretty interesting.
 
Researchers develop the smallest indium gallium arsenide transistor ever built


A cross-section transmission electron micrograph of the fabricated transistor. The central inverted V is the gate. The two molybdenum contacts on either side are the source and drain of the transistor. The channel is the light-colored indium gallium arsenide layer under the source, drain and gate.

Silicon's crown is under threat: the semiconductor's days as the king of microchips for computers and smart devices could be numbered, thanks to the development of the smallest transistor ever to be built from a rival material, indium gallium arsenide.

The compound transistor, built by a team in MIT's Microsystems Technology Laboratories, performs well despite being just 22 nanometers (billionths of a meter) in length. This makes it a promising candidate to eventually replace silicon in computing devices, says co-developer Jesús del Alamo, the Donner Professor of Science in MIT's Department of Electrical Engineering and Computer Science (EECS), who built the transistor with EECS graduate student Jianqiang Lin and Dimitri Antoniadis, the Ray and Maria Stata Professor of Electrical Engineering.

To keep pace with our demand for ever-faster and smarter computing devices, the size of transistors is continually shrinking, allowing increasing numbers of them to be squeezed onto microchips. "The more transistors you can pack on a chip, the more powerful the chip is going to be, and the more functions the chip is going to perform," del Alamo says.

Read more at: Researchers develop the smallest indium gallium arsenide transistor ever built


---

Silicon nanophotonics: Using light signals to transmit data

An IBM 90nm Silicon Integrated Nanophotonics technology is capable of integrating a photodetector (red feature on the left side of the cube) and modulator (blue feature on the right side) fabricated side-by-side with silicon transistors (red sparks on the far right). Silicon Nanophotonics circuits and silicon transistors are interconnected with nine levels of yellow metal wires.

(Phys.org)—IBM announced today a major advance in the ability to use light instead of electrical signals to transmit information for future computing. The breakthrough technology – called "silicon nanophotonics" – allows the integration of different optical components side-by-side with electrical circuits on a single silicon chip using, for the first time, sub-100nm semiconductor technology.

Read more at: http://phys.org/news/2012-12-silicon-nanophotonics-transmit.html#jCp
 
RIGZONE - BP Starts Building World's Biggest Commercial Research Computer

BP has begun building a new supercomputing complex for commercial research that it claims will be the biggest in the world at its Westlake Campus in Houston, the company reported Friday.

The project is designed to keep BP at the forefront of seismic imaging technology and, the firm said, will be a critical tool in its global hunt for oil and natural gas in coming years.
 
Toshiba develops MRAM for smartphone processors

The Japanese conglomerate says MRAM can replace the SRAM currently used with mobile CPUs

Toshiba develops MRAM for smartphone processors - Computerworld

IDG News Service - Toshiba has developed a low-power, high-speed version of MRAM memory that it says can cut power consumption in mobile CPUs by two-thirds.

The company said Monday that its new MRAM (magnetoresistive random access memory) can be used in smartphones as cache memory for mobile processors, replacing the SRAM that is widely used today.

"Recently, the amount of SRAM used in mobile application processors has been increasing, and this has increased the power usage," said Toshiba spokesman Atsushi Ido.

"This research is focused on cutting the power consumption, while increasing speed, as opposed to increasing the amount of memory."
 
IBM ushers in the age of silicon nanophotonics


Years ago IBM began its gradual shift from a consumer-facing computer company to one focused more on research targeting "big ideas." Now one of those deep research areas has yielded fruit that could influence the next phase of general computing, and it's called silicon nanophotonics.

The development allows for the integration of electrical circuits alongside optical components on a single silicon chip. The announcement was made in conjunction with this week's IEEE International Electron Devices Meeting in San Francisco. This new combined configuration facilitates data transfers of up to 25 gigabits per second and could improve the efficiency and overall architecture of large data centers. The sub-100nm semiconductor technology was first trumpeted a couple of years ago by IBM Research, but it is only now ready for commercial deployment.
IBM ushers in the age of silicon nanophotonics | DVICE
 
Engineers develop new magnetoelectric computer memory

By using electric voltage instead of a flowing electric current, researchers from UCLA's Henry Samueli School of Engineering and Applied Science have made major improvements to an ultra-fast, high-capacity class of computer memory known as magnetoresistive random access memory, or MRAM.

The UCLA team's improved memory, which they call MeRAM for magnetoelectric random access memory, has great potential to be used in future memory chips for almost all electronic applications, including smartphones, tablets, computers and microprocessors, as well as for data storage, like the solid-state disks used in computers and large data centers. MeRAM's key advantage over existing technologies is that it combines extraordinarily low energy with very high density, high-speed reading and writing, and non-volatility: like hard disk drives and flash memory sticks, it retains data when no power is applied, but it is much faster.

With MeRAM, the UCLA team has replaced the electric current used in spin-transfer torque (STT) memory with voltage to write data into the memory. This eliminates the need to move large numbers of electrons through wires and instead uses voltage—the difference in electrical potential—to switch the magnetic bits and write information into the memory. This has resulted in computer memory that generates much less heat, making it 10 to 1,000 times more energy-efficient. And the memory can be more than five times as dense, with more bits of information stored in the same physical area, which also brings down the cost per bit.

Read more at: Engineers develop new magnetoelectric computer memory
 
PhD student creates AI machine that can write video games
by Bob Yirka (Phys.org)

Michael Cook, a PhD researcher in the Computational Creativity Group at Imperial College in Britain, along with colleagues, has released a video game that was written in part by an Artificial Intelligence (AI) "machine." The video game, called "A Puzzling Present," is the latest co-developed by an AI machine named Angelina.

Cook et al. have been working on ways to program AI machines to write video games, and Angelina is the result – a system made up of various code modules that allow learning to take place. In the case of Angelina, the learning comes about by examining and borrowing code from existing video games and applying it in unique or novel ways to new games under development. At the heart of the new games are properties known as mechanics – code that gives characters special abilities, such as the ability to fly, bounce or jump when commanded to do so by the human player.
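
Angelina's actual code isn't shown in the article, but the "mechanics" idea, small composable abilities that can be mixed into a generated game, might look something like this minimal Python sketch (all names hypothetical):

```python
import random

# Minimal sketch of "mechanics" as composable abilities (hypothetical names;
# an illustration of the idea, not Angelina's actual code).

def fly(character):
    character["y"] -= 1                          # drift upward (screen y grows downward)

def bounce(character):
    character["vy"] = -character.get("vy", 1)    # invert vertical velocity

def jump(character):
    character["vy"] = -3                         # fixed upward impulse

MECHANICS = {"fly": fly, "bounce": bounce, "jump": jump}

def generate_game(rng=random):
    """Assemble a 'game' by borrowing a subset of the known mechanics."""
    chosen = rng.sample(sorted(MECHANICS), k=2)
    return {"player": {"x": 0, "y": 0}, "mechanics": [MECHANICS[name] for name in chosen]}

game = generate_game()
for mechanic in game["mechanics"]:
    mechanic(game["player"])                     # apply each borrowed ability once
print(game["player"])
```
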
Read more at: PhD student creates AI machine that can write video games
 
German research could be a route to faster computer chips

18 December 2012

Researchers in Germany claim to have paved the way for faster and more powerful computer chips by combining two kinds of technology.



The scientists from two research institutes based in Berlin and Frankfurt have managed to integrate traditional silicon-based circuits with indium-phosphide circuits on a single semiconductor wafer, which could lead to chips that operate at terahertz frequencies without the need to completely replace existing technology.

These could be used for applications that require large amounts of computing power such as high-resolution imaging systems for medical and security technology, as well as ultra-broadband mobile communication applications.

‘It was particularly challenging to make both technologies compatible at the interfaces,’ said Wolfgang Heinrich of the Ferdinand-Braun-Institut (FBH), who carried out the research with Bernd Tillack of Innovations for High Performance Microelectronics (IHP).

‘We managed to align both technology worlds so smoothly that the circuits deliver fully the specified high-frequency performance. This also demonstrates what added value can be created by bundling the competencies of two institutes such as IHP and FBH.’

The research is an attempt to address one of the problems with the continuing trend for smaller circuitry driven by the need for more computing power on individual chips.

When processors using silicon-based CMOS (Complementary Metal Oxide Semiconductor) circuitry operate at speeds of 100GHz or higher, their breakdown voltage decreases and so their available power output declines.

This means they can no longer generate signals strong enough to establish a radio link or to detect material defects.

The IHP and FBH team combined a standard CMOS circuit with a second indium-phosphide circuit in a sandwich-like design, which it found increased the breakdown voltage and gave higher output powers at high frequencies.

Because the technology retains the use of CMOS circuits, future devices using this approach would benefit from the established production methods and low costs of traditional electronics.

To make the combined chip work, the researchers had to merge the whole development environment of both technologies, including the software for the circuit layout, and ensure very high precision in order to adjust the circuits with an accuracy of less than 10 micrometers. The next steps will be to further stabilise the process and to optimise the circuits.


Read more: German research could be a route to faster computer chips | News | The Engineer
 
Small, portable sensors allow users to monitor exposure to pollution on their smart phones
December 18, 2012


(Phys.org)—Computer scientists at the University of California, San Diego have built a small fleet of portable pollution sensors that allow users to monitor air quality in real time on their smart phones. The sensors could be particularly useful to people suffering from chronic conditions, such as asthma, who need to avoid exposure to pollutants.
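
As an aside on what real-time air-quality monitoring typically involves on the software side: raw sensor readings get mapped to an index and compared against health thresholds. Here is a minimal sketch using the US EPA's piecewise-linear AQI formula for PM2.5 (breakpoints as of the 2012 revision of the EPA tables; this is not the UCSD team's code):

```python
# Minimal PM2.5 -> AQI conversion using the US EPA piecewise-linear formula.
BREAKPOINTS = [  # (conc_lo, conc_hi, aqi_lo, aqi_hi), concentration in ug/m^3
    (0.0, 12.0, 0, 50),
    (12.1, 35.4, 51, 100),
    (35.5, 55.4, 101, 150),
    (55.5, 150.4, 151, 200),
    (150.5, 250.4, 201, 300),
]

def pm25_to_aqi(conc):
    for c_lo, c_hi, a_lo, a_hi in BREAKPOINTS:
        if c_lo <= conc <= c_hi:
            return round((a_hi - a_lo) / (c_hi - c_lo) * (conc - c_lo) + a_lo)
    raise ValueError("concentration out of table range")

print(pm25_to_aqi(40.0))  # ~112: "Unhealthy for Sensitive Groups", an asthma alert
```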

http://phys.org/news/2012-12-small-portable-sensors-users-exposure.html
 
AirHarp uses Leap Motion to let you play music with the air

Earlier this year we went hands-on with Leap Motion and its gesture interface that promises to bring the seemingly high-priced dream of Minority Report-style interfaces to the masses. We're still a few months from its official launch, but one developer has already come up with an amazing innovation on the technology called the AirHarp.

AirHarp uses Leap Motion to let you play music with the air | DVICE



Leap Motion giving 10,000 developers free Leaps
Startup with revolutionary gesture control technology is giving 10,000 more developers free units, and it is updating its SDK with a new library of pre-defined interaction APIs.

http://news.cnet.com/8301-11386_3-57559110-76/leap-motion-giving-10000-developers-free-leaps/

by Daniel Terdiman | December 18, 2012 5:00 AM PST
Leap Motion, which created an innovative gesture control technology that measures users' movements to an accuracy of a hundredth of a millimeter, is expanding its developer program and releasing a new software development kit.

According to Michael Buckwald, CEO of the San Francisco startup, Leap Motion is giving 10,000 developers free Leap units over the next two weeks in a bid to dramatically increase the number of potential applications being designed to work with the new technology.


All told, 40,000 people have applied to be part of Leap Motion's developer program, in part because the number of potential applications that could integrate the company's gesture control technology is almost limitless.

When Leap Motion first announced its technology, it expected the Leap would be ideal for disrupting industries like surgery, gaming, architecture, design, engineering, and more. But almost from the get-go, some of the most interesting projects developers suggested involved things like automatically translating sign language.

Some developers proposed using the Leap to fly planes or drive cars, or to support physical rehabilitation and special needs. More than 400 people suggested using the Leap in computer-aided design software -- the same computing challenge that led Leap co-founder and CTO David Holz to begin creating the technology in 2008.

Leap Motion has said that 14 percent of developers want to do gaming-related applications, while 12 percent want to use the technology for music or video applications, 11 percent for art and design, 8 percent for science and medicine, and 6 percent for robotics. At launch, the company plans an Apple-style app store, and more than 90 percent of developers asking for SDKs want to sell their work through such a store. All told, developers have proposed more than 40,000 different applications.


Now this technology could be huge!
 
Last edited:
Smartphone 'Lab' Detects Allergens in Food



The iTube is a new smartphone add-on, still in its prototype stage, that lets anybody test food for allergens such as peanuts using their phones.


A smartphone that can be transformed into a lab with the ability to detect food allergens is the latest in add-on technology from inventor Aydogan Ozcan. He and his researchers are creating prototypes of these devices that turn the phones into precise lab instruments.

The iTube, Ozcan and his colleagues' new device, converts smartphones into colorimeters that are able to detect minute amounts of allergens, such as peanuts, in food. It's designed for use at home or in public, such as at a restaurant, said Ozcan, an engineering professor at the University of California, Los Angeles.
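
For context, a colorimeter relates how much light a sample absorbs to the concentration of the substance being measured (the Beer-Lambert law). A minimal sketch of that step, with invented calibration numbers rather than anything from Ozcan's actual pipeline:

```python
import math

# Core step of a colorimetric assay (Beer-Lambert law): absorbance scales
# with analyte concentration. Calibration values below are invented for
# illustration; this is not the iTube's actual processing.

def absorbance(sample_intensity, reference_intensity):
    """A = -log10(I_sample / I_reference)."""
    return -math.log10(sample_intensity / reference_intensity)

def concentration(a, slope, intercept=0.0):
    """Invert a linear calibration curve A = slope * C + intercept."""
    return (a - intercept) / slope

a = absorbance(sample_intensity=410.0, reference_intensity=980.0)  # camera pixel sums
ppm = concentration(a, slope=0.075)   # hypothetical calibration: absorbance per ppm
print(f"A = {a:.3f}, estimated allergen ~ {ppm:.1f} ppm")
```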

Smartphone 'Lab' Detects Allergens in Food | TechNewsDaily.com



LG refreshes Magic Remote with Siri-style speech smarts
Chris Davies, Dec 19th 2012

http://www.slashgear.com/lg-refreshes-magic-remote-with-siri-style-speech-smarts-19261595/
LG knows you scream at your TV, but now it wants some good to come from it, with a new version of its Magic Remote that can control not only an LG smart TV but other A/V kit in your rack too. The updated lounge wand now supports conversational commands, meaning you can ask for specific content – such as “Show me Call Me Maybe videos” – and it will know what you’re asking for.


That’s all down to LG’s new Natural Language Recognition software, which does some Siri-style parsing to understand exactly what you’re asking for. Though the previous version could follow basic voice commands, it did so in a more step-wise fashion: first ask for the source, then ask for the search term, and then refine after that.

LG’s new Magic Remote keeps the gesture control, as well as the pointing feature which allows you to navigate the TV’s UI simply by waving it around – you can choose a channel number by tracing it in the air, for instance – and there’s also a scroll-wheel for zipping quickly through lists. The buttons are backlit, and the whole thing is smaller and more ergonomically curved, which LG says makes for easier use.

WOW, holy shit. Some cool stuff is hitting the market!
 
DIY augmented reality eyepatch boosts senses
Chris Davies, Dec 19th 2012


Augmented reality has blown up in 2012 thanks to Google’s Project Glass, but a DIY eyepiece likened to a hearing aid for those without 3D vision shows there’s more to wearables than recording point-of-view video. Gregory McRoberts’ Borg-like DIY eyepatch augments his vision with senses humans wouldn’t normally be blessed with: the ability to “see” temperature and precise distance.
DIY augmented reality eyepatch boosts senses - SlashGear
 
The “Z Space” display, developed by Californian company Infinite Z, tracks a user’s eye and hand movements and adjusts the 3-D image that he or she sees in real time, MIT Technology Review reports.

The resulting effect is stunning. Unlike the 3-D video seen in a movie theater or on a 3-D TV, you can move your head around an object — to look at it from the side or from below, for instance — and the Z Space display will adapt and show you the correct perspective.

The technique, which the company calls “Virtual Holographic 3-D,” also lets you manipulate virtual objects as if they really were floating just inches in front of you.

A special stylus connected to the display also contains sensors that allow its movement to be tracked in three dimensions. You can use the stylus to “grab” parts of the virtual image in front of you and move them around in 3-D space.
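
The perspective trick behind displays like this is usually head-coupled, off-axis projection: the view frustum is re-derived every frame from the tracked eye position relative to the physical screen corners. Below is a minimal numpy sketch of the standard math (Kooima's generalized perspective projection), not Infinite Z's implementation:

```python
import numpy as np

# Head-coupled off-axis projection (Kooima's "generalized perspective
# projection"): rebuild the view frustum every frame from the tracked eye.
# Screen corners and the eye share one physical coordinate system (meters).

def off_axis_frustum(pa, pb, pc, pe, near, far):
    """pa, pb, pc: screen lower-left, lower-right, upper-left; pe: eye position."""
    vr = (pb - pa) / np.linalg.norm(pb - pa)   # screen right axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)   # screen up axis
    vn = np.cross(vr, vu)                      # screen normal, toward the viewer
    vn /= np.linalg.norm(vn)
    d = -np.dot(pa - pe, vn)                   # eye-to-screen-plane distance
    # Frustum extents, scaled from the screen plane back to the near plane:
    left   = np.dot(vr, pa - pe) * near / d
    right  = np.dot(vr, pb - pe) * near / d
    bottom = np.dot(vu, pa - pe) * near / d
    top    = np.dot(vu, pc - pe) * near / d
    return left, right, bottom, top, near, far  # feed to a glFrustum-style call

# A 52 cm x 32 cm screen centered at the origin; eye 60 cm out, 10 cm right:
pa = np.array([-0.26, -0.16, 0.0])
pb = np.array([ 0.26, -0.16, 0.0])
pc = np.array([-0.26,  0.16, 0.0])
print(off_axis_frustum(pa, pb, pc, pe=np.array([0.10, 0.0, 0.60]), near=0.1, far=10.0))
```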

A display that makes interactive 3D seem mind-bogglingly real | KurzweilAI

If Obama doesn't screw things up... we can expect some cool stuff to hit the market over the next 10-15 years.
 
Artificial intelligence helps sort used batteries


Wed, 12/19/2012 - 12:07pm

Artificial intelligence helps sort used batteries | News | R&D Magazine
Research at the University of Gothenburg and Chalmers University of Technology has resulted in a new type of machine that sorts used batteries by means of artificial intelligence (AI). One machine is now being used in the U.K., sorting one-third of the country’s recycled batteries.

‘I got the idea at home when I was sorting rubbish. I thought it should be possible to do it automatically with artificial intelligence,’ says Claes Strannegård, who is an AI researcher at the University of Gothenburg and Chalmers University of Technology.

Strannegård contacted the publicly owned recycling company Renova in Gothenburg, which was receptive to an R&D project on automatic sorting of collected batteries. The collaboration resulted in a machine that uses computerized optical recognition to sort up to ten batteries per second.

The sorting is made possible by the machine’s so-called neural network, which can be thought of as an artificial nervous system. Just like a human brain, the neural network must be trained to do what it is supposed to do. In this case, the machine has been trained to recognize about 2,000 different types of batteries by taking pictures of them from all possible angles.
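
As a toy illustration of that training idea (emphatically not Renova's production system), a neural-network classifier learns a mapping from battery images to type labels from labeled examples:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy illustration of training an image classifier (not the sorting machine's
# actual code): learn a mapping from battery photos to type labels.
rng = np.random.default_rng(0)
n_types = 5                               # the real machine knows ~2,000 types
X = rng.random((500, 32 * 32))            # stand-in for 32x32 grayscale photos
y = rng.integers(0, n_types, size=500)    # stand-in type labels

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
net.fit(X, y)                             # "training" the neural network
print(net.predict(X[:3]))                 # new camera frames get classified the same way
```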
 
On-Demand Synaptic Electronics: Circuits That Learn and Forget



On-Demand Synaptic Electronics: Circuits That Learn and Forget | ZeitNews

Researchers in Japan and the US propose a nanoionic device with a range of neuromorphic and electrical functions that may allow the fabrication of on-demand configurable circuits, analog memories and digital-neural fused networks in one device architecture.

Synaptic devices that mimic the learning and memory processes in living organisms are attracting avid interest as an alternative to standard computing elements that may help extend Moore's law beyond current physical limits.

However, so far artificial synaptic systems have been hampered by complex fabrication requirements and limitations in the learning and memory functions they mimic. Now Rui Yang, Kazuya Terabe and colleagues at the National Institute for Materials Science in Japan and the University of California, Los Angeles, in the US have developed two- and three-terminal WO3-x-based nanoionic devices capable of a broad range of neuromorphic and electrical functions.
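
The learn-and-forget behavior such devices aim for is often modeled as a conductance that rises with each stimulation pulse and decays between pulses, so only repeated input gets retained. A toy Python model of that dynamic (generic memristive-style behavior, not the actual WO3-x device physics):

```python
# Toy "learn and forget" synapse: conductance g rises with each input pulse
# and decays between pulses; frequent stimulation consolidates the change.
# Generic memristive-style model, not the WO3-x device physics.

def simulate(pulse_times, t_end, dt=0.1, gain=0.2, tau=5.0):
    g, trace = 0.0, []
    pulse_steps = set(round(t / dt) for t in pulse_times)
    for step in range(int(t_end / dt)):
        if step in pulse_steps:
            g += gain * (1.0 - g)        # potentiation, saturating at g = 1
        g -= (g / tau) * dt              # exponential forgetting
        trace.append(g)
    return trace

rare = simulate(pulse_times=[1.0], t_end=20.0)
frequent = simulate(pulse_times=[1, 2, 3, 4, 5], t_end=20.0)
print(f"after one input:      g = {rare[-1]:.3f}")      # mostly forgotten
print(f"after repeated input: g = {frequent[-1]:.3f}")  # retained much better
```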
 
All Systems Go for Highest Altitude Supercomputer

Fri, 12/21/2012 - 12:42pm


News | Product Design & Development

One of the most powerful supercomputers in the world has now been fully installed and tested at its remote, high altitude site in the Andes of northern Chile. This marks one of the major remaining milestones toward completion of the Atacama Large Millimeter/submillimeter Array (ALMA), the most elaborate ground-based telescope in history. The special-purpose ALMA correlator has over 134 million processors and performs up to 17 quadrillion operations per second, a speed comparable to the fastest general-purpose supercomputer in operation today.

The correlator is a critical component of ALMA, an astronomical telescope which is composed of an array of 66 dish-shaped antennas. The correlator's 134 million processors continually combine and compare faint celestial signals received by the antennas in the ALMA array, which are separated by up to 16 kilometres, enabling the antennas to work together as a single, enormous telescope. The information collected by each antenna must be combined with that from every other antenna. At the correlator's maximum capacity of 64 antennas, as many as 17 quadrillion calculations every second must be performed. The correlator was built specifically for this task, but the number of calculations per second is comparable to the performance of the fastest general-purpose supercomputers in the world.
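
The quoted figures roughly hang together: combining every antenna with every other gives N(N-1)/2 antenna pairs, each of which must be correlated continuously. A quick back-of-envelope check (my arithmetic, not from the article):

```python
# Back-of-envelope check on the correlator figures (my arithmetic, not from
# the article). Every antenna is paired with every other antenna:
n_antennas = 64
baselines = n_antennas * (n_antennas - 1) // 2
print(baselines)          # 2016 antenna pairs to combine continuously
# 17 quadrillion operations per second spread over 134 million processors:
print(17e15 / 134e6)      # ~1.3e8 -> roughly 127 million ops/second per processor
```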
 
PA Consulting creates mobile basestation with Raspberry Pi
Brittany Hillen, Dec 22nd 2012

PA Consulting has created a mobile phone basestation using the Raspberry Pi. In doing so, they replaced a giant 30-foot GSM basestation with a device scarcely larger than your Internet modem. The consulting group, based in Cambridge, UK, briefly details how it achieved this in a video that you can watch after the jump.

According to the consulting firm's team, a group of wireless experts created the mobile basestation using the Raspberry Pi, a radio interface, and a couple of pieces of open-source software. As you can see in the video, the team successfully has two cell phones communicate with each other. The purpose?

To show that it can be done, and for a very small price. “We’ve shrunk a 30ft basestation into a 3-inch Raspberry Pi and created our own mobile phone network. This proves what can be achieved through low cost, off the shelf systems.” Of course, they had to do this in a screened room in order to avoid running afoul of the law.

The system is run using three applications: OpenBTS, FreeSWITCH, and a script for assigning telephone numbers. OpenBTS is used for providing the GSM standard, while FreeSWITCH is used to route calls "in a similar way to Skype," the consulting firm explains. Condensing a 30-foot tower into a 3-inch Raspberry Pi is perhaps the epitome of demonstrating low-cost solutions for the future.
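
The article doesn't show the number-assignment script, but the idea is to map each SIM's identity to a local extension so FreeSWITCH can route calls between the two handsets. A purely hypothetical sketch (OpenBTS handles the GSM air interface and FreeSWITCH the real call routing; none of this is PA Consulting's code):

```python
# Hypothetical sketch of a "script for assigning telephone numbers": map each
# SIM's IMSI to a local extension so calls can be routed between handsets.
import itertools

extensions = itertools.count(2000)      # hand out extensions 2000, 2001, ...
directory = {}                          # IMSI -> extension

def register(imsi):
    if imsi not in directory:
        directory[imsi] = str(next(extensions))
    return directory[imsi]

print(register("001010000000001"))      # first handset joins  -> "2000"
print(register("001010000000002"))      # second handset joins -> "2001"
print(register("001010000000001"))      # re-registration keeps its number
```
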
PA Consulting creates mobile basestation with Raspberry Pi - SlashGear
 
LuminAR Bulb turns any surface into a touch screen

A new computer being developed at the Massachusetts Institute of Technology can display interactive images on any surface, just by screwing into a light socket.

The team behind the device – led by student Natan Linder – aims to create "a new form factor for a compact and kinetic projected augmented reality interface."

LuminAR combines a laser pico-projector, camera and wireless computer, with software that can recognise objects and sense when a finger or hand is touching the surface. It also functions as a scanner with built-in wi-fi.
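
The article doesn't detail the touch-sensing software, but camera-based touch detection is commonly done by segmenting hands against the projected surface and tracking fingertip contours. A rough OpenCV sketch of that general approach, as an illustration rather than the MIT team's code:

```python
import cv2

# Rough sketch of camera-based touch sensing: segment moving hands against
# the projected surface, then treat large foreground contours as touch
# candidates. Illustrative only; not the LuminAR team's actual code.
subtractor = cv2.createBackgroundSubtractorMOG2()
cap = cv2.VideoCapture(0)                 # the camera watching the surface

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)        # foreground = hand over the surface
    mask = cv2.medianBlur(mask, 5)        # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:      # ignore tiny blobs
            # Topmost contour point is a crude fingertip estimate
            tip = tuple(int(v) for v in c[c[:, :, 1].argmin()][0])
            cv2.circle(frame, tip, 8, (0, 255, 0), 2)
    cv2.imshow("touch candidates", frame)
    if cv2.waitKey(1) == 27:              # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```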

The project was developed through 2010, and demonstrated earlier this year at the CHI Conference on Human Factors in Computing Systems. The team has now released a video of its design evolution and potential commercial applications:
LuminAR Bulb turns any surface into a touch screen
 
