Screen-less Touchscreens: 3-D Screens Manipulated by Gesture
The touchscreen, something out of sci-fi movies just a few years ago, is now a ubiquitous part of modern society. Everything from phones and tablets to cars and refrigerators features a touchscreen. Now, fully three-dimensional displays that can be operated with gestures may be the next big thing. We’ve seen the start of gesture technology in our mobile devices and video game systems. In the movie Minority Report, for example, we were treated to a screen-less touchscreen floating in midair and manipulated with simple gestures. That future may not be as far off as you think.
The technology is being dubbed the Leia Display System, or LDS. The imaging technology allows a person to pass right through a display (you’ll get a little wet, but we’ll get to that in a moment) or interact with it through various motions. Don’t expect to have one of these in your office in the next year or so. For the most part, it seems like initial uses will be for marketing gimmicks. The firm has posted various promotional images showing ways in which the technology may benefit marketers and special events. For example, one displayed a luxury car appearing to drive right out of a screen (which was actually just a projected image). Another portrayed fashion models interacting with the LDS on a catwalk.
Of course, video games are high on the priority list of applications of this new tech. The developers also envision architectural applications, as well as industrial uses. But how does it work?
It’s actually less hologram and more projection. All the action takes place inside a frame that produces a constant mist. The image is then projected onto the mist. The gesture control system actually functions independently of the display, and is similar to other modern gesture control technology.
Right now, this screen-less touchscreen is available in two sizes: a smaller, more manageable device of approximately two feet by three feet, and a much larger model closer to ten feet by eight feet. The smaller device can run for ten hours on a single gallon of water, while the larger display consumes a gallon per hour.
An In-Home Sensor Can Get Seniors Timely Care in Case of a Fall
For older adults, falling may become a constant fear. A serious fall can result in the loss of independence, and perhaps even relocation to a nursing home. Receiving expedient care is important in case a fall occurs. Unfortunately, for many seniors who live alone, it could be hours before they make it to a phone or anyone finds them in their injured state. That’s why several German companies have joined forces to create the safe@home automated system.
Sensors are in place on the ceiling of each room of the home. These sensors watch for signs that the person has fallen and is not getting up. The system also listens for accompanying cries for help. If such signs are detected and the person has not righted themselves in a reasonable amount of time, the system automatically kicks into action. What does that action involve?
The first thing it does is call the resident. That way, if the fall was detected in error, the person has the chance to pick up the phone and let the system know there is no crisis at hand. If there is no answer, the system will call everyone it is programmed to contact. This may include emergency services, family members, and friends who serve as emergency contacts.
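As a rough illustration of that escalation logic (the call interface here is invented for the sketch; safe@home’s actual implementation is not public), the flow might look like this:

```python
# Sketch of the fall-response escalation described above.
# `dial` is a hypothetical stand-in for the system's phone interface;
# it returns True if the call was answered.

def respond_to_fall(dial, contacts):
    """Call the resident first; if they answer, the alarm is cleared.
    Otherwise call every programmed contact (emergency services,
    family members, friends) and report who was notified."""
    if dial("resident"):        # resident picks up: false alarm, stop here
        return []
    notified = []
    for contact in contacts:
        dial(contact)
        notified.append(contact)
    return notified
```

The key design point is the grace step: one unanswered call to the resident is what separates a real emergency from a false alarm.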
What are some other advantages of this system? All of the user’s personal information stays within the home, so their privacy is not affected as it might be with an external monitoring company. The system does not require any maintenance from the user, so you don’t have to worry about grandma climbing onto a chair to change a battery. And the system does not require home alterations to be installed, making it a cheaper alternative than some other systems.
Look for this system to be available by the end of 2014.
Tracking Subjects Outside of Line of Sight
Is there a better way to track someone on the other side of a wall than by using Wi-Fi signals? How about adding radio waves? WiTrack is a device recently developed by researchers at MIT that uses Wi-Fi to track objects. One antenna on the device sends a signal toward the subject, while three others listen for the reflection and measure the delay to triangulate a position. The device can also track the movement of its subject.
Due to bandwidth limitations, however, Wi-Fi alone makes it difficult to track location accurately. That’s why an extremely weak radio signal was added to the device. Now, the device can track movement to within eight inches, and sometimes to within four. It can even track a subject’s position in a room three-dimensionally, where the Wi-Fi-only version could track only two-dimensionally.
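To see the triangulation idea in miniature, here is a toy 2-D sketch: treat each receive antenna’s measured delay as a distance to the subject and intersect the resulting circles. (WiTrack’s real geometry involves reflected 3-D signal paths and is considerably more involved; this is just the underlying principle.)

```python
# Toy 2-D sketch of the triangulation step: each antenna's round-trip
# delay gives a distance to the subject (a circle), and intersecting
# the circles pins down the position. Antenna positions are in meters.

C = 299_792_458.0  # speed of light, m/s

def locate(antennas, delays):
    """Trilaterate a 2-D position from round-trip delays measured at
    three antennas; a round-trip delay t corresponds to d = C*t/2."""
    (x1, y1), (x2, y2), (x3, y3) = antennas
    d1, d2, d3 = (C * t / 2 for t in delays)
    # Subtracting pairs of circle equations yields two linear
    # equations in (x, y), solved here with Cramer's rule.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

With more antennas, the same system of equations can be solved in a least-squares sense, which is how measurement noise gets averaged down to the accuracy figures above.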
What are the applications of this technology? It could be used in a rescue situation to find someone who has been trapped inside a structure. It could also pinpoint a criminal’s location within a building during a hostage situation. And elderly persons could be located in their homes the moment paramedics arrive on scene.
There are applications in video games, as well. Right now, all the major systems have spatial recognition software, but they require a clear line of sight between the device and the player. This technology would remove that limitation.
How Google™ Glass Connects With Humanity
While we aren’t seeing the streets lined with people sporting Google™ Glass quite yet, that future may not be far off. And while many hope the connectivity will be a positive advancement, there are also concerns that the constant distraction of this “wearable computer” may just lead to further disconnection from reality.
Right now, it would cost you about $1500 to become a beta tester—and that’s why you aren’t seeing more people wearing the glasses of the future. You get what you pay for, of course, as you’re literally wearing the Internet. Tilt your head, say a few words and the whole world gets pumped into your eyes and ears. But with everything being shoved directly into your head, what happens to the part of the world that’s actually around you?
Here’s the scenario: Glass keeps Google, the Internet and social media constantly in front of a person. Using it takes focus, and that focus has to come from somewhere. The answer may be from everything else. In reality, its effect depends on the user. Will you allow Glass to become your reality or simply a supplement to it? Will you be reading your email instead of focusing on the road? Will you be updating your status while on a date?
Even worse—will anyone notice? You never have to take Google™ Glass out of your pocket to check on something, so people may simply assume you’re inattentive rather than distracted, as so many other people are. Glass is designed to sit just outside your direct line of vision, making it less obtrusive. The fact is, people may just spend a lot more time focusing on that spot rather than seeing what’s directly ahead.
Again, this is all subjective, and each person who uses Glass will react differently. Just think about how people are with their cell phones, however—being attached to them and distracted by them constantly. Is it possible that Google Glass might create the real zombie apocalypse? If so, will any of us even object to being taken by it?
Wi-Fi Gesture Recognition: Be Your Own Human Remote Control
You’ll never have to walk back from the front door to your bedroom to turn off the light before leaving again – just put your hand up and make the right motion to flip the switch. What about when you’re cooking in the kitchen and a song comes on that you don’t like? Don’t wash the raw meat off your hands – just put one of those hands in the air and flip to the next track as if you were at a jukebox.
This is what the world is coming to as computer scientists explore more options in the field of gesture recognition. Gesture recognition isn’t just for making you look silly playing video games anymore – and forget about having to stand in front of a device. Wi-Fi will make it possible to be your own human remote control from anywhere in your home.
Wireless signals are typically used to let all your devices connect to the Internet or one another – the idea here is to repurpose those wireless signals to pick up specific gestures you make anywhere in the vicinity. All you need is a Wi-Fi router that has been adapted, along with some wireless devices to control. Oh yeah, and your hand!
How does Wi-Fi accomplish this? Unlike present gesture recognition devices, Wi-Fi doesn’t rely on line of sight. Since wireless signals can transmit through walls, you can make your gestures from anywhere. Our bodies already reflect wireless signals, so every movement we make affects them in a very slight way – researchers have just come up with a way to detect these minor shifts.
Presently, researchers have the system detecting nine unique gestures. In one study, five individuals performed 900 gestures in total, and the system identified them correctly 94 percent of the time.
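To make the detection idea concrete, here is a deliberately simplified sketch. Assume the tiny frequency shifts have already been extracted from the signal; a gesture then shows up as a characteristic pattern of shift directions. The gesture names and templates below are invented for illustration, not taken from the researchers’ system.

```python
# Toy gesture classifier: +1 means motion toward the antenna raised
# the signal frequency slightly, -1 means motion away lowered it.
# Real systems extract these shifts from raw wireless samples.

TEMPLATES = {
    "push": [+1],              # steady motion toward the antenna
    "pull": [-1],              # steady motion away
    "punch": [+1, -1],         # toward, then back
    "wave": [+1, -1, +1, -1],  # back and forth twice
}

def classify(shifts):
    """Collapse a stream of measured shifts into runs of direction,
    then match the run pattern against the known templates."""
    runs = []
    for s in shifts:
        sign = (s > 0) - (s < 0)
        if sign and (not runs or runs[-1] != sign):
            runs.append(sign)
    for name, pattern in TEMPLATES.items():
        if runs == pattern:
            return name
    return None  # motion didn't match any known gesture
```

Matching on direction runs rather than exact values is what lets the same gesture be recognized whether it is performed quickly or slowly.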
This makes it the first system of this magnitude that doesn’t require setting up cameras or sensors in each room – it just needs an extra antenna for each new person the system is required to recognize. This ensures that a flailing guest can’t turn your house into a discotheque! It also keeps the system from getting hacked – you have to perform a specific sequence of gestures as a password.
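The gesture-password idea can be sketched in the same spirit: the system ignores commands until it has seen the programmed unlock sequence. The sequence and gesture names here are hypothetical examples.

```python
# Sketch of a gesture password: commands are only accepted after the
# unlock sequence has been performed. Sequence is a made-up example.

UNLOCK_SEQUENCE = ["push", "pull", "push"]

def gesture_stream_to_commands(gestures):
    """Return only the gestures that arrive after the full unlock
    sequence has been performed, in order."""
    progress, unlocked = 0, False
    commands = []
    for g in gestures:
        if unlocked:
            commands.append(g)
        elif g == UNLOCK_SEQUENCE[progress]:
            progress += 1
            unlocked = progress == len(UNLOCK_SEQUENCE)
        else:
            # wrong gesture: restart, crediting it if it begins the sequence
            progress = 1 if g == UNLOCK_SEQUENCE[0] else 0
    return commands
```

A stray wave from a guest never matches the full sequence, so nothing downstream ever sees it as a command.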
You may finally be about to get your wish and be able to turn on the coffee maker before you even get out of bed!