Friday, 19 June 2009

World's first mobile Augmented Reality browser

AMSTERDAM, Tuesday June 16th, 2009. Mobile innovation company SPRXmobile launches Layar, the world's first mobile Augmented Reality browser, which displays real-time digital information on top of reality in the camera view of the mobile phone. While looking through the phone's camera lens, a user can see houses for sale, popular bars and shops, jobs, healthcare providers and ATMs. The first country to launch Layar is The Netherlands. Launch partners are local market leaders ING (bank), funda (realty website), Hyves (social network), Tempo-team (temp agency) and Zekur.nl (healthcare provider).




How it works
Layar is derived from location-based services and works on mobile phones that include a camera, GPS and a compass. Layar is first available for handsets with the Android operating system (the G1 and HTC Magic). It works as follows: starting the Layar application automatically activates the camera. The embedded GPS determines the location of the phone, and the compass determines in which direction the phone is facing. Each partner provides a set of location coordinates with relevant information, which forms a digital layer. By tapping the side of the screen the user easily switches between layers. This makes Layar a new type of browser that combines the digital and the real, offering an augmented view of the world.
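The core trick described above (GPS gives position, compass gives heading, each layer is a set of coordinates) can be sketched in a few lines. This is my own minimal sketch, not Layar's actual code; the function names and the 60° field of view are assumptions:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing (degrees, 0 = north) from the phone to a point of interest."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def visible_pois(phone_lat, phone_lon, heading_deg, pois, fov_deg=60.0):
    """Keep only the layer entries that fall inside the camera's horizontal field of view.

    pois is a list of (name, lat, lon) tuples supplied by a layer partner.
    """
    visible = []
    for name, lat, lon in pois:
        # Signed angular difference in (-180, 180] between POI bearing and phone heading.
        diff = (bearing_deg(phone_lat, phone_lon, lat, lon) - heading_deg + 180.0) % 360.0 - 180.0
        if abs(diff) <= fov_deg / 2.0:
            visible.append(name)
    return visible
```

With the phone at the origin facing north, only the POI due north survives the filter; turning the phone 90° would show the eastern one instead.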

Sunday, 14 June 2009

Magic Interaction

A one-week interaction design project for an interactive table. Funny :D

Designed in a one-week interaction design project at GSA (Glasgow School of Art), the video shows a concept for an interactive table that responds to the volume of people's conversations and to their gestures, detected with motion sensors. OLED lights reflect how much input someone contributes by revealing a pattern on the table surface and by making the lamp centerpiece glow brighter. The concept was initially inspired by intimate situations such as dating, where body language and equal contribution to a conversation are subconsciously noted and nonetheless very revealing. Both the concept and the video were created within a week.
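The mapping the table performs (more conversational input leads to a brighter lamp) could be sketched roughly like this. This is a hypothetical illustration of the idea, not the GSA project's implementation; the RMS thresholds are made up:

```python
def brightness(samples, floor=0.01, ceiling=0.5):
    """Map a speaker's recent microphone samples to a lamp brightness in [0.0, 1.0].

    The RMS (loudness) of the samples is mapped linearly between an
    assumed silence floor and an assumed loud-speech ceiling, then clipped.
    """
    if not samples:
        return 0.0
    rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
    level = (rms - floor) / (ceiling - floor)
    return max(0.0, min(1.0, level))
```

A quiet participant's pattern stays dark while a dominant speaker drives the lamp toward full brightness, which is exactly the asymmetry the installation is meant to make visible.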

Body Gesture Interaction Systems

This video shows the results of the PhD thesis project titled "Markerless Full-Body Human Motion Capture and Combined Motor Action Recognition for Human-Computer Interaction", developed by Luis Unzueta (http://luis-unzueta.vndv.com/) at CEIT (http://www.ceit.es/mechanics/index.htm).



THE_SPACE_BETWEEN_US

'The space between us' is a motion tracking installation that acts as a trigger for interaction between people.
Most new media communication systems today are channeled through the medium of written text, for example email, chat and SMS. Our bodies remain passive, and the lack of non-verbal cues often leads to misunderstanding.
In this installation, participants are encouraged to use their bodies to explore new ways of expression through new media and body language. This creates a responsive environment with the potential to generate interaction. The system responds to physical movement, location, proximity, body structure and posture. These parameters are collected in real time and then translated into an audio-visual language.

'The space between us' examines the computer as an agent for communication between people. It offers a playful experience that aims to bring people together and celebrate their non-verbal communication skills.




XTR

Extreme Reality (XTR) is a world leader in 3D Human-Machine Interaction.
It provides a software-based solution for 3D computer control using hand and body movements.

XTR has developed proprietary real-time, high-resolution software that analyzes 3D human motion using a single ordinary webcam, without any additional accessories.

It allows users to interact with computers and gaming consoles, play games and interact in virtual worlds using natural human motions instead of keyboards, mice, joysticks or any other hardware devices, and to use their hands as a virtual mouse.

XTR's universal interface can be tailored and seamlessly integrated into existing applications without modification. It also runs on mobile devices and can be ported to any embedded platform.



Project Natal Xbox 360: Full body interaction






Connecting the digital and the physical world:

Friday, 12 June 2009

Multi-User Handheld Projector Demo

Funny interaction using projectors. I don't like the input device, but I like the interaction and some of the points they considered, like privacy of information and information sharing.

Cross-Dimensional Gestural Interaction Techniques

Interesting way of making information private using gestures:

Multi Display Systems

Multi-user, multi-display interaction:

They are using five displays (three on the walls, one multi-touch table, and one tablet PC). The application is Google Earth. Each display shows different information on the map, such as roads or schools. Input comes from touch on the multi-touch table and from a stylus on the tablet PC.



Jonathan Grudin has worked in this area.


Car display:


Fraunhofer IITB: multi-display workstation for crisis management teams

Stitching method, from Microsoft Research:




E-conic: a perspective-aware interface for multi-display environments





This would be a good method for our Media Room when using the Microsoft Surface with the cube, with an additional tracking system.

Project Description:
Multi-display environments compose displays that can be at different locations from and different angles to the user; as a result, it can become very difficult to manage windows, read text, and manipulate objects. We investigate the idea of perspective as a way to solve these problems in multi-display environments. We first identify basic display and control factors that are affected by perspective, such as visibility, fracture, and sharing. We then present the design and implementation of E-conic, a multi-display multi-user environment that uses location data about displays and users to dynamically correct perspective. We carried out a controlled experiment to test the benefits of perspective correction in basic interaction tasks like targeting, steering, aligning, pattern-matching and reading. Our results show that perspective correction significantly and substantially improves user performance in all these tasks.
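The core idea behind perspective correction, stripped to its simplest form: a surface tilted away from the viewer by angle t appears foreshortened by cos(t), so content can be pre-stretched by 1/cos(t) to appear at its intended size. This is my own one-axis-at-a-time sketch of that idea, not E-conic's actual rendering pipeline (which uses full 3D location data for displays and users):

```python
import math

def perspective_prescale(width, height, display_yaw_deg, display_pitch_deg):
    """Pre-stretch a window so it appears undistorted to the user.

    display_yaw_deg / display_pitch_deg: how far the display surface is
    rotated away from the user's line of sight around the vertical and
    horizontal axes. Foreshortening compresses each axis by cos(angle),
    so we divide by cos(angle) in advance.
    """
    sx = width / math.cos(math.radians(display_yaw_deg))
    sy = height / math.cos(math.radians(display_pitch_deg))
    return sx, sy
```

A display turned 60° away from the user would need a window drawn twice as wide to read as its nominal size; a head-on display needs no correction at all.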







Interesting Working Groups in this area:

Advanced Interaction for Multi-display Environments:
http://hci.usask.ca/research/mdes.shtml


i-Land Roomware Second Generation

"We describe the i-LAND environment which constitutes an example of our vision of the workspaces of the future, in this case supporting cooperative work of dynamic teams with changing needs. i-LAND requires and provides new forms of human-computer interaction and new forms of computer-supported cooperative work. Its design is based on an integration of information and architectural spaces, implications of new work practices and an empirical requirements study informing our design. i-LAND consists of several roomware components, i.e. computer-augmented objects integrating room elements with information technology. We present the current realization of i-LAND in terms of an interactive electronic wall, an interactive table, two computer-enhanced chairs, and two bridges for the Passage-mechanism. This is complemented by the description of the creativity support application and the technological infrastructure. The paper is accompanied by a video figure in the CHI99 video program"




WeSpace:


People and Groups working in the area of Multi Display Environments:

Desney S. Tan:
http://research.microsoft.com/en-us/um/people/desney/projects.htm






CRISTAL on multitouch coffee table (Control of Remotely Interfaced Systems using Touch-based Actions in Living spaces)


This system enables remote control of interactive displays using a touch table.






More information can be found at:
http://mi-lab.org/projects/cristal/



Shared Design Space: Office of Tomorrow:

http://www.officeoftomorrow.org/index.php?id=8


A good workshop in the field of MDE with the title:
Beyond the Laboratory: Supporting Authentic Collaboration with Multiple Displays
at CSCW2008
http://workshops.fxpal.com/cscw2008/AcceptedPapers.aspx


Using a tablet PC with a large vertical display for editing and viewing photos. They use the concept of a Chinese dinner table.




Project Deskotheque:

It is about using multiple displays in offices and meeting rooms.

http://studierstube.icg.tu-graz.ac.at/deskotheque/

They are using the Caleydo system, an information visualization framework for biomedical data:



http://www.caleydo.org/

Dynamo


Acer Tempo M900

This mobile phone has:
- 3.8" display (the iPhone has 3.5", the Toshiba TG01 has 4.1", the largest to date)
- Fingerprint sensor (multi-functional: protects the device, locks account information (PIM), acts as an optical mouse cursor)
- Touch display
- 5 megapixel camera
- Slide-out keyboard
- HSDPA, WVGA screen (800×480, the highest resolution Windows Mobile can support), WiFi, GPS, Bluetooth
- Software

Unboxing:


Hardware:



Software:
The Acer shell is a 3D view that tries to emulate a physical desktop.

Wednesday, 3 June 2009

HCI group at Konstanz University
