Microsoft identifies four main components of the PixelSense interface: direct interaction, multi-touch contact, a multi-user experience, and object recognition.
Direct interaction refers to the user's ability to simply reach out and touch the interface of an application in order to interact with it, without the need for a mouse or keyboard.
Multi-touch contact refers to the ability to have multiple contact points with an interface, unlike with a mouse, where there is only one cursor.
Multi-user experience is a benefit of multi-touch: several people can orient themselves on different sides of the surface to interact with an application simultaneously.
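Supporting several simultaneous contact points means the system must keep a stable identity for each touch as fingers move between frames, rather than maintaining a single cursor. A minimal sketch of one common approach, greedy nearest-neighbour matching of contacts across frames (an illustrative simplification, not Microsoft's implementation; all names below are hypothetical):

```python
import math
from itertools import count

_ids = count(1)  # hypothetical generator of new touch IDs

def track(prev, frame, max_dist=30.0):
    """Match this frame's contact points to known touches.

    prev:  dict {touch_id: (x, y)} from the previous frame.
    frame: list of (x, y) contact points detected this frame.
    Returns an updated {touch_id: (x, y)} dict; unmatched points
    become new touches, unmatched previous IDs are dropped (lift-off).
    """
    result = {}
    unmatched = list(frame)
    # Greedy nearest-neighbour: each old touch claims its closest new point.
    for tid, (px, py) in prev.items():
        if not unmatched:
            break
        best = min(unmatched, key=lambda p: math.hypot(p[0] - px, p[1] - py))
        if math.hypot(best[0] - px, best[1] - py) <= max_dist:
            result[tid] = best
            unmatched.remove(best)
    for p in unmatched:  # brand-new contacts get fresh IDs
        result[next(_ids)] = p
    return result

# Two fingers move slightly and keep their IDs; a third finger appears.
touches = track({}, [(100, 100), (200, 150)])
touches = track(touches, [(104, 102), (198, 155), (50, 60)])
```

Real trackers also smooth positions and tolerate brief drop-outs, but the core idea, persistent per-contact identity, is what separates multi-touch from a single mouse pointer.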
Object recognition refers to the device's ability to recognize the presence and orientation of tagged objects placed on top of it, allowing non-digital objects to be used as input devices. In one example, an ordinary paint brush was used to create a digital painting in the software. This is possible because, by using cameras for input, the system does not depend on the restrictive properties that conventional touchscreen or touchpad devices require of the input tool, such as its capacitance, electrical resistance, or temperature (see Touchscreen).

In the original technology, the computer's "vision" was provided by a near-infrared, 850-nanometer-wavelength LED light source aimed at the surface. When an object touched the tabletop, the light was reflected to multiple infrared cameras, allowing the system to sense and react to items touching the tabletop.

The system ships with basic applications, including photos, music, a virtual concierge, and games, that can be customized for customers. One preinstalled feature is the "Attract" application, an image of water with leaves and rocks in it. By touching the screen, users can create ripples in the water, much as in a real stream. The pressure of the touch alters the size of the ripple created, and objects placed in the water create barriers that ripples bounce off, just as they would in a real pond.

The technology used in newer devices recognizes fingers, tags, blobs, raw data, and objects placed on the screen, allowing vision-based interaction without the use of cameras: sensors in the individual pixels of the display register what is touching the screen.

==Hardware specifications==