Several emerging technologies have appeared in the recent past that have the potential to change today's use of electronic devices and user behaviour.
First of all, researchers at Google have introduced a new user interface concept called Google Glass. It consists of a mini computer and a head-up display mounted on glasses, combined with communication functionality. The device has the potential to bring the concept of augmented reality to a new level and opens up a wide range of new usability concepts for applications. To date, the device is not available in Europe, but the commercial launch in the U.S. started in May 2014. Other companies are working on similar concepts as well, e.g., Samsung's "Galaxy Glass". The acquisition of Oculus VR – maker of the Rift virtual reality headset – by Facebook for 2 billion US$ also points to the importance of this field.
Bringing a wearable device to the user to extend smartphone functionalities and enable new usability concepts is another upcoming trend. The most common idea for this is a smart watch. Several companies have already launched such devices, e.g., the Qualcomm Toq, the MyKronoz ZeNano, the SONY SmartWatch, or the watches from Pebble and Apple. These devices are connected via Bluetooth to the user's smartphone and are equipped with a small touch screen to control smartphone functionalities and display basic information about running applications. A slightly different approach is the Nismo Watch introduced by Nissan. This watch is connected to the car instead of the smartphone and displays driving parameters, e.g., speed, as well as body parameters such as heartbeat and skin temperature. This enables detecting driver tiredness.
In January 2015, Microsoft announced that its "HoloLens", a pair of holographic glasses which provides the user with real-time, interactive augmented reality, will most likely come to market by the end of 2015. This technology is regarded by experts as probably the most outstanding forthcoming development in the field of wearable computers.
The idea of a smart home is also of major interest to the leading IT companies. Apple has introduced HomeKit as a framework to control connected devices at home, and Google has, for example, invested heavily in acquiring Nest Labs, a specialist in home automation. This might bring another big change in the capabilities of mobile apps in the near future, e.g., switching off the lights when a user leaves home or starting the heating a few minutes before the user arrives home.
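The "switch off the lights when the user leaves home" use case mentioned above can be sketched as a simple geofence rule. The following is an illustrative sketch only; the class and function names, the 100 m geofence radius, and the distance approximation are assumptions and do not correspond to any vendor's API.

```python
from dataclasses import dataclass
import math

@dataclass
class Home:
    # Hypothetical home state: location of the house and a single light switch.
    lat: float
    lon: float
    radius_m: float = 100.0   # assumed geofence radius around the home
    lights_on: bool = True

def distance_m(lat1, lon1, lat2, lon2):
    # Equirectangular approximation of the distance in metres;
    # adequate for geofence-scale distances (well under ~100 km).
    r = 6_371_000  # mean Earth radius in metres
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return r * math.hypot(x, y)

def on_location_update(home: Home, lat: float, lon: float) -> Home:
    # Rule: leaving the geofence switches the lights off;
    # positions inside the fence leave the state unchanged.
    if distance_m(home.lat, home.lon, lat, lon) > home.radius_m:
        home.lights_on = False
    return home
```

In a real smart-home framework this rule would be triggered by the smartphone's location service rather than polled, but the decision logic stays the same.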
A further rapidly growing innovation area is the ability to connect smartphones with cars and bring mobile applications into the car. Many proprietary solutions have recently been introduced by automakers, OEMs, and collaboration consortia, e.g., Open Automotive Alliance, iviLink, CarPlay, MirrorLink, mySPIN, Renault R-Link, and Ford SYNC. Two diverging basic concepts are being pursued. On the one hand, apps are executed on the smartphone and controlled via in-car HMI (human-machine interface) technologies, e.g., steering wheel buttons, voice control, or an in-car touch screen. On the other hand, a different approach is to bring a mobile phone OS (operating system) to the car's built-in IVI (In-Vehicle Infotainment) system. Both concepts have in common that they bring an execution platform for mobility apps into the car.
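The first of the two concepts can be sketched as a small event-dispatch layer: the app runs on the smartphone, and the car merely forwards HMI events (steering wheel buttons, voice commands, touch input) over the link. The event names and the binding table below are purely illustrative assumptions, not part of any of the named platforms.

```python
# Hypothetical binding of car HMI events to actions of a phone-side
# mobility app (concept 1 above: app on the phone, HMI in the car).
HMI_BINDINGS = {
    "wheel_button_up":   "next_track",
    "wheel_button_down": "previous_track",
    "voice:navigate":    "start_navigation",
    "touch:home":        "show_home_screen",
}

def dispatch_hmi_event(event: str) -> str:
    # Translate a forwarded car HMI event into an app action.
    # Unknown events are ignored, so apps stay compatible with car
    # models that expose additional controls.
    return HMI_BINDINGS.get(event, "ignored")
```

Under the second concept the same dispatch would happen entirely inside the IVI system, with no phone link involved; the app-facing interface can stay identical, which is why both concepts provide an equivalent execution platform.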
Head-up displays (HUD), which project information directly onto the windscreen, allow drivers to see, e.g., navigation advice and dashboard information without looking away from the road. Companies such as Garmin or Navdy offer HUDs that can be used in any car. Based on head-up display technology, the British carmaker Jaguar Land Rover is currently developing a "Follow-Me Ghost Car Navigation", where an image of a vehicle, which the driver can follow turn-by-turn, is projected onto the windscreen. The company is also researching how, e.g., pedestrians and cyclists can be highlighted to the driver by a moving halo projected on the windscreen. Furthermore, Jaguar Land Rover is experimenting with a "360 Virtual Urban Windscreen", aiming to give the driver an unobstructed view of the outside by showing a corresponding camera feed on screens embedded in the roof support pillars of the car.
With respect to SIMPLI-CITY, these new technologies are rather a market opener than a disabler. The ability to bring mobility-related apps directly into the car will strongly ease the integration of SIMPLI-CITY apps in cars in the near future.
Many automakers (e.g., Mercedes-Benz, Lexus, BMW, Toyota, Volvo, Daimler, Nissan, Audi, Ford) and even Google have demonstrated autonomous driving technologies and are developing complex systems that allow cars to drive without a driver. An autonomous vehicle is capable of sensing its environment and navigating without human input. Such vehicles sense their surroundings using techniques such as radar, laser scanners, GPS, high-definition cameras, ultrasonic sensors, and computer vision.
The introduction of such vehicles to the urban environment could mean a revolution. However, it will take time for such autonomous driving features to become commercially available, as new regulations have to be created and an extremely high level of reliability and safety has to be achieved. Ethical and legal aspects also have to be considered: what dilemmas might a car computer be confronted with, and who is liable in case of an accident – the automaker, the software developer, or the owner?
Introduction to public roads might not be far away anymore: over a million accident-free kilometres have already been driven by test vehicles in urban environments, and partial autonomy (e.g., for parking) is already standard and continuously expanding.
In order to improve the user experience, carmakers are working on the forthcoming "intelligent vehicle", which will ultimately adjust to the user's preferences. While automatic personalised basic car setup, such as setting the seat and steering wheel positions and the internal temperature according to the driver's preferences, is already state of the art, Jaguar Land Rover, for example, announced in July 2014 that it is working on the "self-learning intelligent car of the future": a new in-car intelligence system that recognises the driver by her/his smartphone and will not only set up the car according to her/his preferences, but will also connect, for example, to the driver's calendar, anticipate journeys, and deliver navigation and traffic guidance.
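The personalisation step described above – recognising the driver by a smartphone identifier and applying stored preferences – can be sketched as a simple profile lookup. All identifiers, profile fields, and default values below are illustrative assumptions; a real system would pair the phone over Bluetooth and store profiles securely.

```python
# Hypothetical driver profiles, keyed by a smartphone identifier.
PROFILES = {
    "phone-anna": {"seat_position": 4, "wheel_position": 2, "temperature_c": 21.5},
    "phone-ben":  {"seat_position": 7, "wheel_position": 3, "temperature_c": 19.0},
}

# Neutral fallback settings for drivers the car has not seen before.
DEFAULTS = {"seat_position": 5, "wheel_position": 2, "temperature_c": 21.0}

def setup_car(phone_id: str) -> dict:
    # Apply the recognised driver's stored preferences, or fall back
    # to the defaults for an unknown phone.
    return PROFILES.get(phone_id, DEFAULTS)
```

The "self-learning" aspect would then amount to updating the stored profile whenever the driver manually overrides a setting, so the lookup gradually converges on her/his actual preferences.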