Advances in Computers thread

Roll-up TV is 18-incher, expect 60-inch plus by 2017

Mention "new curved or flexible displays" and that is quite enough to get all the media dogs barking. Thursday's news went further. LG Display announced two new 18-inch OLED panels: the first is a transparent display, while the second can be rolled up into a tube.

The press release stated, "LG Display, the world's leading innovator of display technologies, announced today that it has developed an 18-inch flexible OLED panel that is rollable as well as an 18-inch transparent OLED panel." The company's 18-incher has a level of flexibility where, yes, one can roll it up into a tube. The flexible OLED panel has a high-definition-class resolution of 1,200 x 810, or almost 1 million pixels. The panel's curvature radius is 30R. Darren Quick of Gizmag commented on the numbers: "Unlike the aforementioned 77-inch flexible TV that has a fairly limited range of changeable curvature, LG Display's latest flexible OLED panel boasts a curvature radius of 30R. This means the 18-inch panel can be rolled up into a cylinder with a radius of 3 cm (1.18 in) without the function of the 1,200 x 810 pixel display being affected."
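
As a quick sanity check on those figures (a minimal sketch in Python, using only the numbers quoted above): 1,200 x 810 is indeed just under one million pixels, and a 30R bend radius means the roughly 38 cm wide panel wraps about two full turns around a 3 cm tube.

```python
import math

# Figures quoted above: 18-inch diagonal, 1,200 x 810 pixels, 30R (3 cm) bend radius.
h_px, v_px = 1200, 810
diag_in = 18.0
bend_radius_cm = 3.0

total_pixels = h_px * v_px                           # 972,000 -- "almost 1 million pixels"
diag_cm = diag_in * 2.54
width_cm = diag_cm * h_px / math.hypot(h_px, v_px)   # panel width from diagonal and aspect ratio

circumference_cm = 2 * math.pi * bend_radius_cm      # length of one full wrap at a 3 cm radius
turns = width_cm / circumference_cm                  # how many times the panel wraps around the tube

print(f"{total_pixels:,} pixels")
print(f"panel width ~ {width_cm:.1f} cm")            # ~37.9 cm
print(f"one wrap ~ {circumference_cm:.1f} cm, so ~{turns:.1f} turns when fully rolled")
```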

Read more at: Roll-up TV is 18-incher, expect 60-inch plus by 2017
 
I think this is definitely on the horizon for monitors, and I'm shocked it wasn't targeted at monitors FIRST versus TVs.

The Korean electronics manufacturer unveiled a new kind of big-screen display that is ultra-thin and rolls up, and it expects to put the tech into TVs by 2017.

Flexible displays have been around for years, but LG Display has taken the concept up a couple of notches with these two new 18-inch OLED panels, each with 1,200 x 810 resolution. Not only are they relatively large, but each can be rolled up tightly to a radius of just 1.2 inches without affecting the display at all.
 

Self assembly of 15,000 semiconductor chips per hour


A first automated reel-to-reel fluidic self-assembly process for macroelectronic applications is reported. This system enables high-speed assembly of semiconductor dies (15,000 chips per hour using a 2.5 cm wide web) over large-area substrates. The optimization of the system (over 99% assembly yield) is based on identification, calculation, and optimization of the relevant forces. As an application, the production of a solid-state lighting panel is discussed, involving a novel approach to applying a conductive layer through lamination.


This communication reports on recent progress towards a first implementation of a self-assembly machine based on surface-tension-directed self-assembly. The reported assembly process is no longer a discontinuous, small-batch, hand-operated process but resembles an automated, machine-like process involving a conveyor belt and a reel-to-reel (RTR) assembly approach with automated agitation. As a comparison, the assembly rate of conventional chip-level pick-and-place machines depends on the cost of the system and the number of assembly heads used. For example, a high-end FCM 10000 (Muehlbauer AG) flip-chip assembly system can assemble approximately 8,000 chips per hour with a placement accuracy of 30 μm. Our current design achieves 15,000 chips per hour using a 2.5 cm wide assembly region, which is only a factor of 2 better than one of the faster pick-and-place machines; scaling to 150,000 chips per hour, however, would be possible using a 25 cm wide web, which would be a factor of 20 faster. In principle, scaling to any throughput should be possible considering the parallel nature of self-assembly. In terms of placement accuracy, our precision increases with a reduction in chip and solder-bump size. Generally, it exceeds the 30 μm limit for the components that have been used. Under optimized operational conditions, we achieved an assembly yield of 99.8% using the self-assembly process. As an application, the assembly machine is applied to the realization of area lighting panels incorporating distributed inorganic light-emitting diodes (LEDs).
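
Since throughput in a reel-to-reel scheme scales with web width, the comparison against the pick-and-place baseline above reduces to simple arithmetic. Here is a minimal sketch using only the figures quoted in the abstract, assuming strictly linear scaling with web width:

```python
# Figures quoted in the abstract above.
PICK_AND_PLACE_CPH = 8_000      # high-end FCM 10000 flip-chip assembler, chips per hour
SELF_ASSEMBLY_CPH = 15_000      # demonstrated reel-to-reel rate on a 2.5 cm wide web
DEMO_WEB_CM = 2.5

def rtr_throughput(web_width_cm: float) -> float:
    """Reel-to-reel self-assembly rate, assuming throughput scales linearly with web width."""
    return SELF_ASSEMBLY_CPH * (web_width_cm / DEMO_WEB_CM)

for width in (2.5, 25.0):
    rate = rtr_throughput(width)
    print(f"{width:4.1f} cm web: {rate:8,.0f} chips/hour "
          f"(~{rate / PICK_AND_PLACE_CPH:.0f}x a fast pick-and-place machine)")
# 2.5 cm web:   15,000 chips/hour (~2x)
# 25.0 cm web: 150,000 chips/hour (~19x, roughly the "factor of 20" quoted above)
```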
 
Monolithic 3D Integration of Carbon Nanotube Logic Transistors could provide a thousand fold power reduction for computer processors

The crystal ball is murky beyond the 7-nm node. Transistors made with carbon nanotubes as the channel material hold special promise because the carbon nanotube's body is ultra-thin, about one nanometer, while at the same time retaining excellent carrier-transport properties. No other bulk semiconductor has this unique advantage, which allows the carbon nanotube transistor to scale to the shortest possible gate length.

Stanford's Philip Wong summarized the recent development of carbon nanotube transistor technology for digital logic. This includes: synthesis of fully aligned carbon nanotubes at wafer scale, fabrication of high-performance carbon nanotube transistors, 3D-integrated carbon nanotube circuits, low-voltage (0.2 V) operation of carbon nanotube transistors, compact models for circuit simulation, performance benchmarking of carbon nanotube transistors against conventional CMOS at the device level and at the full-chip processor level, and demonstration of circuits and complete systems.

Philip Wong described a theoretical 3D chip stack interleaving next-generation memory and logic technologies made with carbon nanotubes. Privately, he acknowledged the material still faces huge challenges before it is ready for practical use. Wong showed a "club sandwich" made from carbon nanotubes. It interleaved layers of resistive and magnetic RAM with logic layers made from 1D and 2D field effect transistors.


Monolithic 3D Integration of Carbon Nanotube Logic Transistors could provide a thousand fold power reduction for computer processors
 
IBM invests $3 billion to extend Moore’s law with post-silicon-era chips and new architectures


IBM invests $3 billion to extend Moore's law with post-silicon-era chips and new architectures | KurzweilAI
IBM announced today it is investing $3 billion for R&D in two research programs to push the limits of chip technology and extend Moore’s law.

The research programs are aimed at “7 nanometer and beyond” silicon technology and developing alternative technologies for post-silicon-era chips using entirely different approaches, IBM says.

IBM will be investing especially in carbon nanoelectronics, silicon photonics, new memory technologies, and architectures that support quantum and cognitive computing.

7 nanometer technology and beyond

IBM researchers and other semiconductor experts predict that semiconductors show promise to scale from today’s 22 nanometers down to 14 and then 10 nanometers in the next several years.

However, scaling down to 7 nanometers by the end of the decade will require significant investment and innovation in semiconductor architectures as well as invention of new tools and techniques for manufacturing, IBM says.
 
TSMC is finally making 20nm parts for Apple’s next-gen iPhone, iPad

TSMC is finally making 20nm parts for Apple's next-gen iPhone, iPad | ExtremeTech

For years, analysts have reported on the shadowy negotiations between the world’s largest foundry, TSMC, and Apple as the two companies haggled and discussed the shape of future collaboration. Now, the fruits of that collaboration are finally moving towards the light of day — TSMC has reportedly begun volume shipments of 20nm silicon earmarked for Apple’s next-gen iPhone (and possibly iPad). The new chip, likely codenamed the A8, will be the first flagship part built at TSMC instead of at Samsung, and it’s a major coup for the Taiwanese company to have stolen the business from its Korean rival.

Exactly how much of Apple’s business is shifting to TSMC is still unknown. The A8 will be the first 20nm SoC available on the market; companies like Qualcomm aren’t expected to introduce their own 20nm hardware until 2015. That gap gives Apple first-mover momentum and it’s undoubtedly part of what the company paid for in its agreements with TSMC. It’s possible that this shift could spark other companies to move production to other facilities — companies that compete with Apple at TSMC could conceivably move business to Samsung or GlobalFoundries if they think the Taiwanese foundry won’t be able to keep up with demand.
 
Intel Corporation to Detail Its 14-Nanometer Process Technology in September

Intel Corporation to Detail Its 14-Nanometer Process Technology in September (INTC)

At the upcoming Intel (NASDAQ: INTC ) Developer Forum in San Francisco, Intel is likely to make a number of interesting announcements around its future chips and accompanying platforms. However, this year investors will get a special treat as Intel will finally take the wraps off of its next-generation 14-nanometer manufacturing technology.

At long last, the cold, hard technical details surrounding transistor performance, gate density, and metal pitch will be unveiled. This will help investors build a much better picture of how Intel's 14-nanometer process stacks up against competing processes from Taiwan Semiconductor (NYSE: TSM ) and Samsung (NASDAQOTH: SSNLF ) .

What will Intel reveal?
The last time that Intel did a process disclosure was back at the 2012 Intel Developer Forum in September. Though many had hoped for a reveal of the 14-nanometer technology at some point during 2013, Intel kept its cards pretty close to its vest. The only real details we know are relatively vague performance and density metrics given in a slide at the company's 2013 investor meeting.


Intel Could Show Off 10-Nanometer Wafers This September

http://www.fool.com/investing/gener...d-show-off-10-nanometer-wafers-this-sept.aspx
While some seem to believe that Intel (NASDAQ: INTC ) may lose its manufacturing technology lead to the likes of Taiwan Semiconductor (NYSE: TSM ) and Samsung (NASDAQOTH: SSNLF ) , reality is likely to be rather different. In fact, at an upcoming developer conference, Intel could show evidence that its lead is quite intact.

Intel launching first 14-nanometer products, demonstrating 10-nanometer?
According to Digitimes, Intel will be launching its first 14-nanometer Broadwell products under the Core M brand at the 2014 Intel Developer Forum in September. As a quick reminder, Core M is a family of products intended for fanless clamshells and detachable/convertible 2-in-1 designs. The rest of the designs -- aimed at higher power and performance notebooks -- will roll out over the course of 2015.

More interesting, though, is that Digitimes reports Intel will demonstrate 10-nanometer wafers at the same time. We've known that Intel's chip teams have been designing on 10 nanometers for quite some time, so it wouldn't be far-fetched for Intel to demonstrate a wafer of test chips.
 

A Wall Becomes A Collaborative Space



Txchnologist
Korean researchers are fine-tuning a display system that could upgrade collaborative work and play. The TransWall, being built at the Korea Advanced Institute of Science and Technology’s Design Media Lab, is a two-sided see-through touchscreen. It allows people to interact with it and each other, and provides audio and tactile feedback to users.

TransWall works through projectors on either side of the device that produce images on a holographic screen film, which is sandwiched between two transparent acrylic sheets. A surface transducer attached to the display provides the audio and tactile feedback to users. Its developers say the system is meant to facilitate interpersonal communication and gaming.

 
Project Adam: a new deep-learning system
Developed by Microsoft, Project Adam is a new deep-learning system modelled after the human brain that has greater image classification accuracy and is 50 times faster than other systems in the industry. The goal of Project Adam is to enable software to visually recognise any object. This is being marketed as a competitor to Google's Brain project, currently being worked on by Ray Kurzweil.

Microsoft Research shows off advances in artificial intelligence with Project Adam | Next at Microsoft
 
Samsung, Google Inc.'s Nest Labs Unveil ‘Thread’ Network For Smart Homes

Samsung, Google Inc.'s Nest Labs Unveil 'Thread' Network For Smart Homes

Thread is backed by Silicon Valley's biggest tech titans. Will it catch on?

Some of Silicon Valley's biggest names are betting that your home is about to get smarter. Google Inc.’s (NASDAQ:GOOGL) Nest Labs, Samsung and six other manufacturers announced a new network designed to connect the Internet of Things on Tuesday. The group calls it “Thread,” and it spelled out Thread's advantages over existing wireless technologies.

Analysts expect modern “smart homes” to include a number of sensor-wielding appliances capable of communicating with each other and humans: Nest’s smartphone-controlled smoke alarm can notify a homeowner at work if it detects smoke, while Samsung Electronics’ (KRX:005930) touchscreen refrigerator reads the latest tweets aloud to users during breakfast. And while Apple, Inc.’s (NASDAQ:AAPL) HomeKit may offer a central hub for all of those devices, it's not meant to keep them connected on a central network.
 
BAE Systems announces Striker II HMD for combat pilots

This week at the Farnborough Airshow, BAE Systems showed off its latest Helmet-Mounted Display (HMD), the Striker II flight helmet. The unit not only provides digital, visor-projected night vision and tracking systems that are equivalent to or better than current HMD systems, but has also seen a weight reduction for greater safety and comfort.

The Striker II HMD is based on BAE’s Striker HMD system used in the Typhoon and Gripen fighters. BAE says that the Striker II is "platform agnostic" and integrates easily with a variety of platforms, including both digital and analog electronic displays.

The night vision system mounted inside the helmet makes the helmet lighter than previous units and lowers its center of gravity. This makes the helmet more comfortable (relatively) and puts less stress on the pilot's head, neck, and shoulders under the g-forces pulled during the tight turns that fighter planes are famous for. The system needs no manual configuration for day-to-night transitions and, along with the plane's systems and targeting displays, feeds into the integrated visor-projected system.

The high-resolution visor-projected system has a 40-degree binocular field of view with 1280x1024 resolution and an independent channel for each eye to provide 3D images. BAE says that the display has near-zero latency and is fully visible in day and night conditions.
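
For a rough sense of how sharp that display is, the implied angular resolution can be computed from those two figures. This is a back-of-the-envelope sketch that assumes the 1,280-pixel dimension spans the full 40-degree horizontal field, which the article does not state explicitly:

```python
# Display figures from the article: 40-degree binocular field of view,
# 1280 x 1024 resolution (per-eye channel assumed for this estimate).
FOV_DEG = 40
H_PIXELS = 1280

px_per_degree = H_PIXELS / FOV_DEG        # ~32 pixels per degree
arcmin_per_px = 60 / px_per_degree        # ~1.9 arcminutes per pixel

print(f"{px_per_degree:.0f} px/deg, about {arcmin_per_px:.1f} arcmin per pixel")
```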

In addition, the Striker II boasts new hybrid opto-inertial technology that constantly monitors the position of the pilot’s head even if optical tracking fails. So the plane’s computer always knows where the pilot is looking and can position symbols on the display accurately for high-precision target tracking and engagement.

"As the industry transitions from analogue to digital display solutions, Striker II brings a superior, fully digital capability to multiple platform types," says Joseph Senftle, vice president and general manager for Communications and Controls Solutions at BAE Systems. "Designed to address evolving mission requirements with advanced digital night vision technology, our new HMD was built to be 'future proof' and seamlessly adaptable to technology advancements in the years ahead."
 
The world’s first photonic router
A step toward building quantum computers

The world's first photonic router | KurzweilAI

Weizmann Institute scientists have demonstrated the first photonic router — a quantum device based on a single atom that enables routing of single photons, a step toward overcoming the difficulties in building quantum computers.

A photonic switch

At the core of the device is an atom that can switch between two states. The state is set just by sending a single particle of light — or photon — from the right or the left via an optical fiber.

The atom, in response, then reflects or transmits the next incoming photon, accordingly. For example, in one state, a photon coming from the right continues on its path to the left, whereas a photon coming from the left is reflected backwards, causing the atomic state to flip.

In this reversed state, the atom lets photons coming from the left continue in the same direction, while any photon coming from the right is reflected backwards, flipping the atomic state back again. This atom-based switch is solely operated by single photons — no additional external fields are required.
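
The routing logic described above amounts to a two-state machine, so it can be illustrated with a toy classical model. This is only a sketch of the behaviour in the text; the real device is a single atom coupled to an optical fiber and operates on single photons, which a few lines of Python obviously do not capture:

```python
# Toy model of the single-atom photonic switch described above.
class PhotonicSwitch:
    def __init__(self, state: str = "A"):
        # State "A": transmits photons arriving from the right, reflects photons from the left.
        # State "B": transmits photons arriving from the left, reflects photons from the right.
        self.state = state

    def route(self, incoming: str) -> str:
        """Route one photon arriving from 'left' or 'right'; return its outgoing direction."""
        transmit_from = "right" if self.state == "A" else "left"
        if incoming == transmit_from:
            # Photon passes straight through; the atomic state is unchanged.
            return "left" if incoming == "right" else "right"
        # Photon is reflected back the way it came, and the atomic state flips.
        self.state = "B" if self.state == "A" else "A"
        return incoming

switch = PhotonicSwitch()
for photon in ["right", "left", "left", "right"]:
    out = switch.route(photon)
    print(f"photon from {photon:5s} -> exits {out:5s}, atom now in state {switch.state}")
```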
 
Fundamental photoresist chemistry findings could help extend Moore's Law

(Phys.org) —Over the years, computer chips have gotten smaller thanks to advances in materials science and manufacturing technologies. This march of progress, the doubling of transistors on a microprocessor roughly every two years, is called Moore's Law. But there's one component of the chip-making process in need of an overhaul if Moore's law is to continue: the chemical mixture called photoresist. Similar to film used in photography, photoresist, also just called resist, is used to lay down the patterns of ever-shrinking lines and features on a chip.

Now, in a bid to continue decreasing transistor size while increasing computation and energy efficiency, chip-maker Intel has partnered with researchers from the U.S. Department of Energy's Lawrence Berkeley National Lab (Berkeley Lab) to design an entirely new kind of resist. And importantly, they have done so by characterizing the chemistry of photoresist, crucial to further improve performance in a systematic way. The researchers believe their results could be easily incorporated by companies that make resist, and find their way into manufacturing lines as early as 2017.

The new resist effectively combines the material properties of two pre-existing kinds of resist, achieving the characteristics needed to make smaller features for microprocessors, which include better light sensitivity and mechanical stability, says Paul Ashby, staff scientist at Berkeley Lab's Molecular Foundry, a DOE Office of Science user facility. "We discovered that mixing chemical groups, including cross linkers and a particular type of ester, could improve the resist's performance." The work is published this week in the journal Nanotechnology.

Read more at: Fundamental photoresist chemistry findings could help extend Moore's Law
 

Jibo the first family robot could revolutionize personal robotics by solving ease of use robotics like iPads for tablets and iPhones for smartphones


JIBO, The World's First Family Robot, has raised $864,000 on Indiegogo and still has 26 days left to go on its crowdfunding campaign.

Scheduled to be available by December 2015, Jibo will be capable of interacting with its owners; for now, it is just a prototype, but that could soon change.

Social robotics - that's the idea behind Jibo, and Cynthia Breazeal, an associate professor at the Massachusetts Institute of Technology, has worked in the field for years. Involved in MIT's Personal Robots Group, she has been focusing on developing the principles, techniques, and technologies for personal robots.

Breazeal and her team took a simple approach to designing Jibo.
At first glance, the 11-inch-tall robot -- with a six-inch base -- looks more like a retro television than a 21st-century robot. But rest assured, it will be loaded with all the amenities of current technology, such as Bluetooth and WiFi.

Come next December, Jibo is expected to have the following capabilities, which will allow him to act as an assistant, reminding you of upcoming events; a storyteller, complete with sound effects, graphics, and physical movements to boot; a photographer, noticing smiles to automatically take a photo; a messenger and telepresence avatar, allowing users to communicate; as well as a companion.

How JIBO Works
Setup

* Follow JIBO's instructions to connect him to your WiFi network
* Teach JIBO to recognize your face & voice
* Learn what you can ask JIBO to do
* Download the JIBO mobile app (Android & iOS) to connect JIBO to your mobile devices
Connect to Devices

Your JIBO Network can include:

* Mobile devices
* Personal computers
* Other JIBOs
 
The birth of topological spintronics

The discovery of a new material combination that could lead to a more efficient approach to computer memory and logic will be described in the journal Nature on July 24, 2014. The research, led by Penn State University and Cornell University physicists, studies "spin torque" in devices that combine a standard magnetic material with a novel material known as a "topological insulator." The team's results show that such a scheme can be 10 times more efficient for controlling magnetic memory or logic than any other combination of materials measured to date.

Read more at: The birth of topological spintronics
 
World's fastest supercomputer gets even faster
World's fastest supercomputer gets even faster - ecns.cn

China's Tianhe-2, the world's fastest supercomputer, began an upgrade on Wednesday, said the National Supercomputer Center in Guangzhou, in south China's Guangdong Province.

The upgrade will continue until the end of August or early September and increase overall computing speed from 54 to more than 100 petaflops, said Yuan Xuefeng, center director. The system will still be able to handle heavy analysis, computing, and processing workloads during the upgrade.

Tianhe-2 was developed by the National University of Defense Technology and has been in commercial operation since April.

In 2015, a hardware upgrade will start, after which the "super brain" is expected to be completely powered by domestically made chips.

Tianhe-2 occupied the top spot for the third time in the biannual Top500 list of supercomputers at the end of June. Its computing capacity in one hour equals that of the whole population of China using calculators for 1,000 years.
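
That calculator comparison can be sanity-checked with a rough order-of-magnitude estimate. The population figure and the one-calculation-per-second rate below are assumptions chosen for the illustration, not numbers from the article:

```python
# Order-of-magnitude check of the calculator comparison above.
# Assumptions (not from the article): ~1.36 billion people, one calculation per second each.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

tianhe2_flops = 100e15                             # ~100 petaflops after the upgrade
ops_in_one_hour = tianhe2_flops * 3600             # ~3.6e20 operations

population = 1.36e9
human_ops = population * SECONDS_PER_YEAR * 1000   # 1,000 years at 1 calculation/s each, ~4.3e19

print(f"Tianhe-2, one hour:  {ops_in_one_hour:.1e} operations")
print(f"Everyone in China:   {human_ops:.1e} calculations over 1,000 years")
print(f"ratio: ~{ops_in_one_hour / human_ops:.0f}x")   # the claim is, if anything, conservative
```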
 
London mayor expected to say city will rock 5G by 2020

London mayor Boris Johnson this week will pledge to bring 5G to London in the next six years, reported The Telegraph on Monday. The pledge is part of a more extensive plan for London's infrastructure between now and 2050. The scheme is also part of a collaboration with the University of Surrey. Mayors of cities typically like to underscore something unique or superior about their place, and in Johnson's case he is emphatic about showing off London's full promise vis-à-vis digital connectivity. The delivery of 5G would also make London the site of the world's first major 5G mobile network deployment.

Read more at: London mayor expected to say city will rock 5G by 2020
 
Researchers achieve 5TB-per-second fiber-optic network milestone
Researchers achieve 5TB-per-second fiber-optic network milestone - SlashGear
While you're busy pining away for Google Fiber, a group of researchers at the Technical University of Denmark have been busy putting it to shame. Trumping their last network milestone achieved back in 2009, the group has developed a fiber network that pushes more than 5TB of data per second through a single optical cable.

The network gives users speeds of 43 Tbps, which works out to about 5.4 TB per second. As the folks at ExtremeTech pointed out, such speeds would allow you to download a 1GB movie in 0.2 milliseconds -- far faster than the blink of an eye.
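
Those conversions are easy to verify. A quick check of the arithmetic (decimal units, as in the article):

```python
# Checking the figures quoted above: 43 Tbps in bytes per second, and the
# transfer time for a 1 GB file at that rate.
link_tbps = 43
bytes_per_second = link_tbps * 1e12 / 8           # ~5.4e12 B/s, i.e. about 5.4 TB per second

file_bytes = 1e9                                  # a 1 GB movie
transfer_ms = file_bytes / bytes_per_second * 1e3

print(f"{bytes_per_second / 1e12:.2f} TB/s")      # ~5.38 TB/s
print(f"1 GB in {transfer_ms:.2f} ms")            # ~0.19 ms, the "0.2 milliseconds" above
```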

The university -- henceforth called DTU -- is notable for many reasons, not least for being the first to break the one-terabit-per-second barrier, back in 2009. The Karlsruhe Institute of Technology then took the record in 2011 at 26 terabits per second, a mark that stood until now.

The speeds were achieved using a single laser and a single optical fiber, with multi-core fiber being used to hit the faster speeds. While you won't be seeing these speeds in your home any time soon, it is an important milestone that brings such connections a step closer.
 
Researchers eliminate need for external power in Wi-Fi connectivity system


One of the advantages of the "connected world" is that myriad different devices can interact with each other over Wi-Fi to exchange data, control equipment, and generally lay the foundations of the Internet of Things of the not-too-distant future. Unfortunately, on the downside, all of the Wi-Fi connections need power to operate, and this severely restricts the pervasiveness of this technology. However, researchers at the University of Washington have developed a system that they say eliminates the need for power supplies for these connections by using what is known as radio frequency (RF) backscatter technology.

The researchers claim that their prototype technology uses radio signals as a source of power and incorporates this in existing Wi-Fi infrastructure to deliver connections to the internet for devices. The power is sourced via RF Wi-Fi backscatter that exists as reflected energy whenever a wireless router or other radio-frequency device transmits (similar to the technology found in RFID tags, where the circuit remains dormant until radio signals on the device’s antenna create an induced voltage in the circuit to power the device).

In effect, the system scavenges power from the wireless transmitting devices around it to power battery-free devices and connect them to the Internet. The previous technological challenge in providing such Wi-Fi connectivity was that even low-power Wi-Fi consumes three to four times more power than can generally be harvested from Wi-Fi backscatter signals.
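
For intuition, the communication side of such a backscatter link works much like the RFID analogy above: the battery-free device toggles its antenna between reflecting and non-reflecting states, and a nearby receiver reads the bits as small changes in received signal strength. The sketch below is a deliberately simplified, hypothetical illustration of that idea; the RSSI levels, noise figure, and one-bit-per-packet encoding are invented for the example and are not taken from the University of Washington system:

```python
import random

# Hypothetical illustration of backscatter signalling: a battery-free tag encodes
# one bit per Wi-Fi packet by either reflecting (1) or not reflecting (0) the
# router's transmission; the receiver decodes by thresholding RSSI changes.
# All numbers here are made up for the example.
BASE_RSSI_DBM = -40.0        # signal strength seen with no reflection
REFLECT_BOOST_DB = 1.0       # small bump when the tag's antenna reflects
NOISE_DB = 0.2

def tag_transmit(bits):
    """RSSI seen at the receiver for each packet while the tag modulates its antenna."""
    return [BASE_RSSI_DBM + REFLECT_BOOST_DB * b + random.gauss(0, NOISE_DB) for b in bits]

def receiver_decode(rssi_samples):
    """Recover the bits by thresholding around the midpoint between the two RSSI levels."""
    threshold = BASE_RSSI_DBM + REFLECT_BOOST_DB / 2
    return [1 if r > threshold else 0 for r in rssi_samples]

message = [1, 0, 1, 1, 0, 0, 1, 0]
received = receiver_decode(tag_transmit(message))
print("sent:   ", message)
print("decoded:", received)
```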
 
