
Windows 7 Sensor and Location platform
One of the new components in Windows 7 is the Sensor and Location platform, which provides a uniform way to work with various sensors and other measurement devices.
Why is this needed? Sensors simplify trivial tasks and relieve us of unnecessary chores, which is especially valuable for laptop owners who are always on the move. Imagine a light sensor built into the computer and available to all applications, letting them adjust their display to the ambient lighting. Another example is a GPS sensor: an application can adapt to your current location, for instance by showing the weather for the city you are in. Many more examples could be given; it all depends on imagination and the specific use case. Applications that adapt to their environment in this way are called context-sensitive applications.

A natural question arises: what, in fact, has changed? Why could this not be done before? The answer is simple: these scenarios could be implemented earlier as well, but it was not so easy. Working with external sensors usually came down to exchanging data over a COM port, and each sensor had its own specific API. For this reason it was very difficult to build a universal programming interface that several applications could use simultaneously and transparently.
The Sensor and Location platform solves this problem. With its help you can access various sensors and receive information from them in a single, uniform style. Importantly, the problem is solved at the operating-system level; such a step can give a new impetus to the development of context-sensitive applications. The following diagram shows the structure of the objects used to work with sensors; we will look at it in more detail below.

To connect a sensor to the Sensor and Location platform in Windows 7, you need to implement a driver for it and simple .NET wrapper classes for working with the sensor.
Of course, in the near future end users are unlikely to experience the full power of this platform: it will take some time for hardware developers to build sensors and integrate them into their hardware. We developers, however, can begin preparing today, so I plan to show how to work with the Sensor and Location platform in the context of our business applications.
To experiment not with virtual sensors but with something closer to reality, we will use a device from Freescale Semiconductor built around the JMBADGE2008-B microcontroller. It is a small board carrying several sensors: an accelerometer, a light sensor, and buttons.

This device was designed specifically to demonstrate the capabilities of the Sensor and Location platform on Windows 7, and anyone can buy one, so it suits our purposes well.
Before looking at specific applications, let's see how the Sensor and Location platform works. Before Windows 7 and the Sensor & Location platform appeared, connecting a sensor came down to implementing a driver and custom software for it.

With such an arrangement, interacting with external sensors is possible but difficult: each application must work with whatever API the sensor vendor and its service software provide. The problem is especially acute when an application must use many sensors of the same type from different manufacturers. How does the Sensor & Location platform propose to solve this?
At the operating-system level there are now built-in mechanisms for working with sensors, exposed through a standard, unified programming interface: the Sensor API. All interaction with a sensor goes through the Sensor API, and every sensor is handled in the same style. You no longer need to integrate with a native API through P/Invoke.

To work with the Sensor and Location API from managed code, download the corresponding .NET Interop Sample Library. It contains .NET wrappers for the Sensor API and several classes through which you can work with sensors.
The SensorManager class is the entry point. Through it you can get information about the available sensors and work with them; for example, the GetSensorBySensorId<T> method gives access to a specific sensor. Each sensor should have a wrapper class that inherits from the base Sensor class. Three such implementations already exist in the .NET Interop Sample Library: AmbientLightSensor, Accelerometer3D, and UnknownSensor.

The main idea when working with sensors is as follows. When a sensor's state changes (connected, disconnected, active, and so on), the StateChanged event is raised; this event lets you start or stop working with the sensor. Once communication with the sensor is established, the DataReportChanged event is raised whenever new data arrives; how often this happens depends on the sensor and its driver. In the handler of this event you can read the sensor's values and react in the application. For this, the GetProperty method is used: it takes the identifier of the property to be read from the sensor. As a rule, the details of these calls are hidden inside the wrapper class implemented for a specific sensor.
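As a rough sketch of what such a wrapper hides, reading a raw value could look something like the following. The property GUID below is a placeholder and the exact GetProperty signature may differ slightly in the .NET Interop Sample Library; treat this as an illustration, not the library's actual code.

```csharp
using System;

class RawReadingSketch
{
    // Hypothetical handler: reads a raw value via the generic GetProperty call.
    void OnDataReportChanged(Sensor sender, EventArgs e)
    {
        // Placeholder identifier; real property GUIDs come from the
        // sensor vendor's documentation.
        var someDataPropertyId = new Guid("00000000-0000-0000-0000-000000000000");

        // GetProperty returns the raw, untyped value for the requested id;
        // wrapper classes normally convert it to a convenient type.
        object raw = sender.GetProperty(someDataPropertyId);
        Console.WriteLine("Raw sensor value: {0}", raw);
    }
}
```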
In addition, each sensor has its own identifier (a GUID) by which the device can be identified. When implementing a wrapper class for a sensor, this identifier is specified with the SensorDescription attribute. Thus a sensor can be accessed either by explicitly specifying its identifier or by referring to its wrapper class.
/// <summary>
/// Represents a generic ambient light sensor
/// </summary>
[SensorDescription("97F115C8-599A-4153-8894-D2D12899918A")]
public class AmbientLightSensor : Sensor
{
    // ...
}

// The generic parameter resolves the TypeId via the SensorDescription attribute:
var sensors = SensorManager.GetSensorsByTypeId<AmbientLightSensor>();
Let's implement some examples using the sensors available in the Freescale device. We will work with two types of sensors: an accelerometer (which measures the tilt of the device) and a light sensor (which measures the illumination level in the room).
The first application we implement will display the illumination level as a glowing light bulb on the form. First, subscribe to the state-change event in the Sensor API; this is needed so the application starts working even if the sensor is plugged in on the fly. In its handler, get the list of all sensors of the desired type and subscribe to their DataReportChanged event. In that handler, read the value from the light sensor and write it to a TextBox on the form. Because the event is raised on a background thread, you also need to call Dispatcher.Invoke so the processing happens on the UI thread, where we can touch the elements on the form. This gives us the following code.
private void Window_Loaded(object sender, RoutedEventArgs e)
{
    SensorManager.SensorsChanged += SensorManagerSensorsChanged;
}

void SensorManagerSensorsChanged(SensorsChangedEventArgs change)
{
    Dispatcher.Invoke((System.Threading.ThreadStart)(UpdateSensorsList));
}

private void UpdateSensorsList()
{
    var sensors = SensorManager.GetSensorsByTypeId<AmbientLightSensor>();
    foreach (var sensor in sensors)
        sensor.DataReportChanged += delegate(Sensor sender, EventArgs e)
        {
            Dispatcher.Invoke((System.Threading.ThreadStart)(delegate
            {
                if (ActiveSensorsListBox.SelectedItem == sender)
                {
                    CurrentValue.Text =
                        ((AmbientLightSensor)sender).CurrentLuminousIntensity.Intensity.ToString();
                }
            }));
        };
}
The TextBox on the form now shows the current illumination value, and it is easy to build some visualization on top of it. Using WPF bindings, we will display the degree of illumination as light bulbs. As a result, we get the following application.
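The article does not show the binding itself. As a sketch under assumed names, the bulb could be driven by a value converter that maps the measured value to an opacity; the LuxToOpacityConverter class, the MaxLux scale, and the element names in the XAML comment are all illustrative assumptions, not the original code.

```csharp
using System;
using System.Globalization;
using System.Windows.Data;

// Hypothetical converter: maps the illumination value (bound from the
// TextBox in the example above) to an opacity for a bulb image.
public class LuxToOpacityConverter : IValueConverter
{
    // Assumed scale: readings at or above this render the bulb fully lit.
    private const double MaxLux = 300.0;

    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        double lux;
        if (!double.TryParse(value as string, NumberStyles.Any, culture, out lux))
            return 0.0;
        return Math.Min(lux / MaxLux, 1.0);
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        throw new NotSupportedException();
    }
}

// XAML usage sketch (element and resource names are assumptions):
// <Image Source="bulb.png"
//        Opacity="{Binding Text, ElementName=CurrentValue,
//                  Converter={StaticResource LuxToOpacityConverter}}" />
```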


Since it is hard to judge how the application behaves from a photo, I recorded a short video that clearly shows how the sensor responds to the lighting level.
Demonstration >>
The other sensor is more interesting: it measures the tilt of the device along different axes. To demonstrate it, we will take a three-dimensional model of an aircraft in a WPF application and rotate it in space according to the sensor readings. The application works on the same principle as the previous one: we find the necessary sensors, subscribe to events, and in the handlers write the coordinates into input fields on the form. Then we bind the model's coordinates to the values of those fields.
private void UpdateSensorsList()
{
    foreach (var sensor in SensorManager.GetSensorsByTypeId<Accelerometer3D>())
    {
        sensor.DataReportChanged += delegate(Sensor sender, EventArgs e)
        {
            Dispatcher.Invoke((System.Threading.ThreadStart)(delegate
            {
                if (UseXCoordinate.IsChecked == true)
                    CurrentXValue.Text = ((Accelerometer3D)sender)
                        .CurrentAcceleration[Accelerometer3D.AccelerationAxis.X].ToString();
                if (UseYCoordinate.IsChecked == true)
                    CurrentYValue.Text = ((Accelerometer3D)sender)
                        .CurrentAcceleration[Accelerometer3D.AccelerationAxis.Y].ToString();
                if (UseZCoordinate.IsChecked == true)
                    CurrentZValue.Text = ((Accelerometer3D)sender)
                        .CurrentAcceleration[Accelerometer3D.AccelerationAxis.Z].ToString();
            }));
        };
    }
}
As you can see from this example, the sensor-related code has barely changed: only the code that reads data from the sensor is different, and everything else stays the same.



As the photo shows, when the device is tilted the sensor passes the readings to the application and the model's coordinates change, so we can see the three-dimensional model tilt.
Demonstration >>
Interestingly, several applications can use these sensors at the same time, and one application can use several sensors. Let's combine the model-rotation application with the light sensor: in addition to rotating the model, we will show the sun. If the illumination in the room decreases, the sun will fade; the brighter the room, the more intensely the sun shines. This application uses the code from the two previous examples, so I will not repeat it and will show the result right away.



You can also see this application in motion.
Demonstration >>
These examples clearly show that working with sensors in Windows 7 is very simple. However, it requires a Windows 7 driver and a wrapper class for the Sensor & Location platform. As a rule, drivers are supplied by the hardware manufacturer, while the wrapper class can be implemented independently.
As I said before, the entry point is the SensorManager class. With it you can access the necessary sensors and work with them. The class offers methods for retrieving the list of all sensors, retrieving a sensor by identifier or by type, requesting permission to use a sensor, and an event that fires when the set of sensors in the system changes.
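Those capabilities can be sketched as follows. The method names match what this article describes for the .NET Interop Sample Library, but the exact signatures and the namespace are assumptions; check the library source for the real shape.

```csharp
using System;
// Assumed namespace of the .NET Interop Sample Library wrappers.
using Windows7.Sensors;

class SensorDiscoveryDemo
{
    static void Main()
    {
        // All sensors currently known to the system.
        var all = SensorManager.GetAllSensors();

        // Sensors of one type, resolved through the wrapper's
        // SensorDescription attribute.
        var lightSensors = SensorManager.GetSensorsByTypeId<AmbientLightSensor>();

        // React when sensors are plugged in or removed.
        SensorManager.SensorsChanged += delegate(SensorsChangedEventArgs change)
        {
            // Re-enumerate the sensors here.
        };
    }
}
```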

Each sensor has two main identifiers: SensorId and TypeId. TypeId identifies a class of devices; for example, you can use it to obtain all the light sensors in the system. SensorId is unique to each device: if the system has three motion sensors of the same type, each has its own identifier. There is also CategoryId, which groups sensors into categories.
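In code, the difference between the two lookups might look like this. The concrete GUID is a placeholder, not a real sensor identifier, and the exact parameter list of GetSensorBySensorId is an assumption based on its name.

```csharp
using System;

class SensorLookupDemo
{
    static void Main()
    {
        // By TypeId: every ambient light sensor in the system,
        // regardless of vendor.
        var lights = SensorManager.GetSensorsByTypeId<AmbientLightSensor>();

        // By SensorId: exactly one physical device. The GUID here is a
        // placeholder; a real SensorId is assigned by the manufacturer.
        var id = new Guid("00000000-0000-0000-0000-000000000000");
        var oneSensor = SensorManager.GetSensorBySensorId<AmbientLightSensor>(id);
    }
}
```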
Each identifier is a GUID, set by the manufacturer when the device and its drivers are developed; you can therefore get a specific sensor only if you know its identifier. Each sensor is represented by the Sensor class, which carries general information about the sensor and methods that return its data as untyped, generalized collections. Such a representation is clearly inconvenient for applications, so the custom is to implement a wrapper class for each sensor on top of the Sensor API by inheriting from the base Sensor class. The demo library already contains two such implementations, for the accelerometer and for the light sensor. The device we examined earlier, however, also has touch buttons, so let's implement a wrapper class for that sensor too.
We define a new class derived from Sensor. For the Sensor API to recognize it, mark it with the SensorDescription attribute, specifying the TypeId of this kind of sensor. The Sensor base class has two things important to us: the DataReport property, which contains the data from the sensor, and the DataReportChanged event, which fires when that data changes. The task of our class is to take this data and hand it to the caller in a convenient form, so we also create a small helper class that parses the information in the DataReport.
Experiments show that pressing button 1 produces code 1, button 2 produces code 2, button 3 produces code 4, and button 4 produces code 8; in other words, each button maps to one bit, and code 0 is reported when all buttons are released. With this we can write the following code.
[SensorDescription("545C8BA5-B143-4545-868F-CA7FD986B4F6")]
public class SwitchArraySensor : Sensor
{
    public class SwitchArraySensorData
    {
        private static Guid KeyStatePropertyId = new Guid(@"38564a7c-f2f2-49bb-9b2b-ba60f66a58df");

        public SwitchArraySensorData(SensorReport report)
        {
            uint state = (uint)report.Values[KeyStatePropertyId][0];
            Button1Pressed = (state & 0x01) != 0;
            Button2Pressed = (state & 0x02) != 0;
            Button3Pressed = (state & 0x04) != 0;
            Button4Pressed = (state & 0x08) != 0;
        }

        public bool Button1Pressed { get; private set; }
        public bool Button2Pressed { get; private set; }
        public bool Button3Pressed { get; private set; }
        public bool Button4Pressed { get; private set; }
    }

    public SwitchArraySensorData Current
    {
        get { return new SwitchArraySensorData(DataReport); }
    }

    public event EventHandler StateChanged;

    public SwitchArraySensor()
    {
        DataReportChanged += SwitchArraySensor_DataReportChanged;
    }

    void SwitchArraySensor_DataReportChanged(Sensor sender, EventArgs e)
    {
        if (StateChanged != null)
        {
            StateChanged.Invoke(sender, e);
        }
    }
}
In effect, this class is the Sensor API wrapper for the sensor we need. To use it, subscribe to the StateChanged event and read the data through the Current property.
To get the list of available sensors of a given type, use the GetSensorsByTypeId method of the SensorManager class; the TypeId of the sensors is determined from the SensorDescription attribute. With these sensors we can subscribe to the event and receive the data in a form convenient for the application, for example by displaying the state of the buttons on a form.
private void Window_Loaded(object sender, RoutedEventArgs e)
{
    var sensors = SensorManager.GetSensorsByTypeId<SwitchArraySensor>();
    foreach (SwitchArraySensor sensor in sensors)
    {
        switch (sensor.FriendlyName)
        {
            case "Left Switch Array Sensor":
                sensor.StateChanged += delegate(object leftSensor, EventArgs arg)
                {
                    var buttons = ((SwitchArraySensor)leftSensor).Current;
                    SwitchState(LeftButton1, buttons.Button1Pressed);
                    SwitchState(LeftButton2, buttons.Button2Pressed);
                    SwitchState(LeftButton3, buttons.Button3Pressed);
                    SwitchState(LeftButton4, buttons.Button4Pressed);
                };
                break;
            case "Right Switch Array Sensor":
                sensor.StateChanged += delegate(object rightSensor, EventArgs arg)
                {
                    var buttons = ((SwitchArraySensor)rightSensor).Current;
                    SwitchState(RightButton1, buttons.Button1Pressed);
                    SwitchState(RightButton2, buttons.Button2Pressed);
                    SwitchState(RightButton3, buttons.Button3Pressed);
                    SwitchState(RightButton4, buttons.Button4Pressed);
                };
                break;
        }
    }
}
As a result, we get an application that looks like this.

Of course, implementing this particular sensor is a rather synthetic example, but it clearly demonstrates the process of connecting a sensor to the Sensor API.
If you need to implement your own device driver to plug into the Windows 7 Sensor and Location platform, I recommend consulting the official documentation.
I wish you success in creating your context-sensitive applications!
Demo Applications:
Ambient.zip
Accelerometer3D.zip
Combined.zip
ButtonSensor.zip