Flocking With Lego Mindstorms NXT

I’ve had the same three tabs open in Firefox for over 2 years now. I’ve left these open to remind me to write this post.

The links are to some very useful documentation/libraries and guides discussing low level Lego Mindstorms NXT functions which helped me solve many problems with the project.

The brief of the project was to implement flocking using a group of Lego Mindstorms NXT robots.

The video is a quick demonstration of the set up and a bit of flocking. Unfortunately, I never really got a very large area to test it in, so the running time was always quite short, however, I think it’s still a good demonstration of the concept.

  • Connect to the ‘router’ brick via USB using MATLAB and the RWTH Mindstorms NXT Toolbox library
  • Connect via Bluetooth from the ‘router’ brick to the ‘slave’ bricks using a modified version of Daniele Benedettelli’s NXT MATLAB Bluetooth Router library
  • Acquire the image from the HD web cam using Processing
  • Perform some colour recognition and ‘glob’ finding using the JMyron Processing Library
  • Calculate the positions of the robots from the locations obtained by processing the image
  • Transmit the positional data of all robots via the router NXT to each of the slave NXTs
  • Process the positional data on the robots themselves using Next Byte Codes (NBC)
  • Process Ultrasonic distance information on the robots to detect obstacles
  • Produce a new direction vector as an average of the robot’s current direction, position of other robots, and obstacle avoidance
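The final step in that pipeline, blending the robot’s current heading with attraction to the flock and repulsion from obstacles, can be sketched roughly as follows (a minimal sketch, not the project’s actual code; the weights and the 30 cm obstacle threshold are illustrative assumptions):

```python
import math

def new_direction(heading, flock_centre, position, obstacle_dist,
                  w_heading=0.5, w_cohesion=0.4, w_avoid=0.8):
    """Blend current heading, attraction to the flock centre, and
    obstacle repulsion into a single unit direction vector."""
    hx, hy = math.cos(heading), math.sin(heading)
    # Vector from this robot towards the centre of the flock.
    cx, cy = flock_centre[0] - position[0], flock_centre[1] - position[1]
    norm = math.hypot(cx, cy) or 1.0
    cx, cy = cx / norm, cy / norm
    # If an obstacle is close ahead, push back along the current heading.
    avoid = w_avoid if obstacle_dist < 30 else 0.0  # cm, illustrative threshold
    vx = w_heading * hx + w_cohesion * cx - avoid * hx
    vy = w_heading * hy + w_cohesion * cy - avoid * hy
    norm = math.hypot(vx, vy) or 1.0
    return vx / norm, vy / norm
```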

The full report as was handed in for my dissertation is given below. Leave a comment on this page if you are interested in having any of the source code.

Flocking with Lego robots
Scott Snowden (Supervisor: Dr. Ben Milner)


  1. Introduction
    1. Flocking In Nature
    2. The Importance of Flocking
    3. Reynolds’ Boids
    4. Previous and Current Projects
    5. This Project
  2. Locating Robots
    1. Description of the Problem
    2. Potential Solutions
      1. Vision
      2. Sound
      3. Light
      4. GPS
      5. Dead Reckoning
      6. External positioning
      7. Summary of locating techniques
    3. Preliminary Experiments
      1. Light Following Experiments
      2. Dead Reckoning Experiments
      3. Camera Tracking Experiments
    4. Refined Solutions
  3. Communicating Between Robots
    1. Description of the problem
    2. Potential Solutions
      1. Infra Red
      2. Radio Frequency
      3. Bluetooth
    3. Preliminary Experiments
      1. C++ Library
      2. Matlab
    4. Refined Solutions
  4. Avoiding Collisions
    1. Description of the problem
    2. Potential Solutions
      1. Bump Sensors
      2. Ultrasound
      3. Positional Data
      4. Combining Multiple Techniques
    3. Preliminary Experiments
    4. Refined Solutions
    5. Tests
    6. Discussion of Results
  5. Flocking
    1. Description of the problem
    2. Swarming
    3. Potential Solutions
      1. Subsumption Architecture
      2. Vectorised Control
    4. Using tasks for multi threading
    5. Tracking and Heading data
    6. Tests
    7. Discussion of Results
  6. Results and Analysis
    1. Individual Movements of robots
    2. Collision Avoidance
    3. Group Movements
  7. Discussion and Outcomes
    1. Use of the Lego NXT platform
    2. Processing and Matlab for image processing and communication
    3. Bluetooth as a high speed mobile network
    4. Flocking as a means to control multiple autonomous vehicles
  8. Conclusions
  9. References


This project introduces the idea of utilising flocking to control a small group of Lego robots. It aims to show that flocking can be achieved with inexpensive and simple hardware. There is currently a large amount of research into robot swarms and flocks, but most of it uses dedicated, purpose-built hardware. This approach instead uses readily available hardware with very basic capabilities.


Many thanks go to Ben Milner for his support throughout this project. To Peter Trollope for his support with the robots. Thanks also to the authors of the open source libraries and development environments used here.


The motivation for this project came from a graphics module which implemented a flocking algorithm to control a large number of artificial bacteria in a game. Flocking requires no central control, as each member of the flock determines its actions completely independently of every other member. This project will investigate the different techniques required to produce artificial flocking using the Lego Mindstorms NXT robotics platform.

Flocking In Nature

In nature, the movements and activities of a group of animals can often be seen to exhibit behaviour which appears to require central organisation or significant cooperation and communication between the members of the group. An excellent example of this can be seen in termite mounds, where an extraordinarily complex structure ‘emerges’ from the simple behaviour that each individual termite exhibits. This behaviour can be visually impressive in flocks of birds, schools of fish and swarms of insects. Goldstein (1999) describes emergence as:

“…the arising of novel and coherent structures, patterns, and properties during the process of self-organization in complex systems.”

Swarming is a primitive form of emergent behaviour. In swarms of bees, for example, each bee will attempt to remain close to the other bees around it while avoiding collisions with them. There is, however, no desire to align with the other bees such that an individual travels in the same direction as the swarm. This is what gives a swarm its cloud-like appearance and lack of any apparent course. In the case of flocking, however, individuals do have this additional desire to remain aligned with others, which keeps the flock moving in a common direction, usually towards some particular location. There are a number of benefits to moving collectively in this way. For birds, a large flock can provide safety in numbers (Sumpter 2008), effective hunting and more efficient flying (Cutts and Speakman 1994).

Humans provide some excellent examples of emergent behaviour: the Internet has developed out of the independent actions of many individuals with no central control. Social bookmarking brings order to an unorganised collection of web pages: where pages are tagged with keywords, a small vocabulary emerges without any centralised guidance (Robu et al. 2009). Flocking behaviour has also been observed in large crowds of humans, where it has been shown that just 5% of a crowd can influence the movements of the other 95% (Dyer et al. 2008).

The Importance of Artificial Flocking

Artificial flocking can provide the same benefits as those found in nature. As with natural flocking, artificial flocking fundamentally requires that each individual in the flock acts independently of all others and that there exists no central controller or necessity to communicate. Flocking has potential in a number of areas in modern robotics. It can be used to effectively distribute small, cheap robots over large areas in potentially hazardous environments. Generally, the idea behind artificial flocking is to use many inexpensive, disposable robots and have as large a flock as possible. It can provide large mobile networks of distributed computing to remote locations. It can provide further benefits if, in addition to flocking, the robots are given tasks, for example to collect samples or locate gas leaks. Kernbach et al. (2008) describe an artificial symbiotic system where multiple small robots act cooperatively to overcome challenges or to operate more effectively as a group.

Developments in battery, solar and motor technology mean that robots can now be very small, yet contain a massive array of sensors including cameras, GPS, compasses, gyroscopes, gas, radiation, pressure and temperature sensors. A selection of these could be integrated in aerial, ground or submersible robots in order to survey their environment. By combining these with manipulators or sampling equipment, the robots could carry out simple tasks such as clean up operations (Kakalis and Ventikos 2008). Robots with different skills could be combined in a flock and called upon when their skills are required.

There is potential for this type of flocking on many scales, including the nanoscale. Some of the most important uses might be in search and rescue, surveillance in hostile environments or for planetary exploration (Vassev et al. 2008). There is also a great deal of research into modular reconfigurable robotics (MR), another application of robot swarms. This field looks at collections of robots which are allowed to attach to one another to produce a larger tool with additional uses. Yim et al. (2003) introduces the idea of using MR to control robots in space.

Reynolds’ Boids

Some of the first work on simulating flocking was carried out by Craig Reynolds (Reynolds 1987). In 1986, Reynolds created a computer model of coordinated animal motion called Boids, which became the basis of many other flocking simulation projects for years to come. Reynolds’ algorithm used three very simple rules to control the individual flockmates (the boids); these rules also provide the basis for the flocking algorithm used in this project. The rules require only that a particular boid considers local flockmates and not the flock as a whole. It may look at the 10 closest boids, for example, or only boids within a certain distance. The diagrams in figure 1 describe the rules that Reynolds uses.

Cohesion: steer towards the average mass of the flock

Alignment: steer towards the average heading of the flock

Separation: steer to avoid colliding with other flockmates

Figure 1: The three basic rules of flocking as described by Reynolds

In addition to these rules, others can be added to give more controllable movements to the flock, for example to follow paths or complete specific tasks.
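Reynolds’ three rules can be sketched in a few lines of code. This is an illustrative sketch rather than the project’s implementation; the weights and the separation radius are assumed values:

```python
def flock_step(me, neighbours):
    """One update of Reynolds' three rules for a single boid.
    Each boid is a dict with 'pos' and 'vel' 2-vectors."""
    n = len(neighbours)
    # Cohesion: steer towards the average position of local flockmates.
    cx = sum(b['pos'][0] for b in neighbours) / n - me['pos'][0]
    cy = sum(b['pos'][1] for b in neighbours) / n - me['pos'][1]
    # Alignment: steer towards the average velocity of local flockmates.
    ax = sum(b['vel'][0] for b in neighbours) / n - me['vel'][0]
    ay = sum(b['vel'][1] for b in neighbours) / n - me['vel'][1]
    # Separation: steer away from flockmates that are too close.
    sx = sy = 0.0
    for b in neighbours:
        dx, dy = me['pos'][0] - b['pos'][0], me['pos'][1] - b['pos'][1]
        if dx * dx + dy * dy < 25 ** 2:      # 25-unit separation radius (assumed)
            sx, sy = sx + dx, sy + dy
    # Blend the three steering terms into a new velocity (weights assumed).
    return (me['vel'][0] + 0.01 * cx + 0.1 * ax + 0.05 * sx,
            me['vel'][1] + 0.01 * cy + 0.1 * ay + 0.05 * sy)
```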

Previous and Current Projects

Flocking has been used in a number of research projects in the past and is currently being developed for use in various real world applications.

As with many areas of technology, robot swarms are receiving great military interest. The robustness of these systems combined with the low cost of replacing individual members is attractive in a harsh combat or surveillance environment. The MOD ran a competition in 2008 to “help solve challenges faced in a hostile urban environment”. Solutions involving swarming won two out of the three awards given. Swarm Systems Ltd now has a contract with the MOD to develop and demonstrate an ‘Autonomous Hovering Micro Air Vehicle Swarm’.

Robot swarms have been used for search and rescue after earthquakes and in other urban disaster environments. In these situations, survivors must be found as quickly as possible; with swarming, large, inaccessible areas could be covered by many small robots in a short period of time (Blitch 1996). In some implementations, heat, sound and movement sensors are used to search for signs of life in collapsed buildings (Murphy et al. 2000). These systems are currently used in disaster situations to aid rescue workers.

The basis of flocking is also being researched for use in air traffic control for personal aircraft. The system aims to automatically keep light aircraft a safe distance from each other. With increasing numbers of aircraft in the skies and the development of personal aircraft, traditional air traffic control systems are failing to meet demand. Due to the decentralised approach of flocking, far less central processing and knowledge of every aircraft’s position is required.

The automotive industry is also investigating the use of flocking to control vehicles in a way that allows them to move through an environment avoiding collisions with obstacles and each other. Nissan takes inspiration from schools of fish in their concept car, EPORO (Nissan 2009). Using laser range finding and Ultra Wide Band radio communication, EPORO uses the three basic rules of flocking to control its behaviour. Nissan sees this technology being used in cars of the future to keep them a safe distance from obstacles and each other.

This Project

Previous projects that implement flocking with robots use expensive and complex equipment to construct a flock. Unfortunately, this negates many of the benefits of using a flock of many robots instead of one or two more complex ones. Ultimately, a flock would consist of very inexpensive, simple and effectively disposable robots. This project aims to solve some of the challenges faced when implementing a flocking algorithm by using relatively simple and inexpensive components.

The project will utilise the Lego Mindstorms NXT robotics development platform. The NXT is designed for educational use, and so is reasonably inexpensive and simple to use out of the box. It is, however, a very powerful and versatile hardware platform ideal for robotics prototyping. It has a range of sensors and can be programmed in a number of different languages aimed at different skill levels. To use this platform to perform flocking will be a significant challenge due to the limited components, but will demonstrate that it is possible to use simple robots.

The ultimate goal of this project is to demonstrate a functioning flocking algorithm with multiple Lego robots, each acting independently of the others with no external processing or guidance on where to move. The project will split this task into its constituent parts and solve these individually before recombining them to give the final solution. It will be shown that the robots can successfully swarm with each other whilst avoiding collisions, and this will then be developed to show that the robots can also flock.

The Four Challenges

This project splits neatly into a selection of different computing problems. These problems will initially be solved independently of each other and combined for the final demonstration of flocking. There are four challenges, and these are each given a section in the report.

  1. Locating the robots: determining the robots’ positions, either absolutely or relative to one another.
  2. Communicating the positional data, either between the robots or from a central device to the robots.
  3. Avoiding collisions, mainly with obstacles in the environment but also with the other robots.
  4. Flocking: using the data collected by the robots to determine where each needs to move next to continue flocking.

These are all problems which have been researched and solved separately using different techniques varying in complexity. The solutions used in this project are determined by the available hardware and the accuracy of the technique.

Programming the robots to follow a flocking algorithm should be reasonably simple in comparison to the other three challenges. It requires bringing together robot sensor information and location data, applying the rules of flocking and controlling the motors’ movements. It is important that the robots respond appropriately to the data that they collect and prioritise their actions according to this information. For example, it is more important for a robot to avoid colliding with the environment than it is for it to complete any tasks it may have. This will require an architecture which controls the robot in the most natural and safe way possible.
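That kind of prioritisation can be sketched as a fixed-priority behaviour arbiter (a sketch only; the behaviour names, thresholds and actions are hypothetical, not taken from the project code):

```python
def arbitrate(sensors):
    """Return the action of the highest-priority behaviour that fires.
    Behaviours are checked in priority order: safety first, tasks last."""
    behaviours = [
        ('avoid_obstacle', lambda s: s['ultrasonic_cm'] < 20, 'reverse_and_turn'),
        ('keep_separation', lambda s: s['nearest_robot_cm'] < 30, 'steer_away'),
        ('flock',           lambda s: True,                      'follow_flock_vector'),
    ]
    for name, fires, action in behaviours:
        if fires(sensors):
            return action
    return 'stop'
```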

This Report

This report is structured in the same way as the project. The four separate problems are discussed and solved individually in four different sections. Further sections discuss the combination of these solutions along with experiments, discussions and analysis of the culmination of these efforts. For each of the technical solutions provided, there is first a description of the problem followed by a discussion of the possible solutions. Some of these solutions are selected for further experimentation and any results are given and discussed. The chosen solution is presented and then evaluated and verified. Graphs, figures, tables, equations and algorithms are labelled with captions and numbered throughout the report. The diagrams involving robot images are not necessarily to scale; these are used for visualisation purposes only. On some graphs, data points have been removed and replaced with lines for clarity.

Locating Robots

This section discusses the challenge of making a particular robot aware of the others around it. Potential solutions to this problem are discussed along with methods used in other projects and applications. Some preliminary experiments are carried out to determine the suitability of each method and the best is presented along with further analysis and refinement.

Description of the Problem

The first challenge is to find a suitable and accurate method of having each robot locate its neighbours. This does not necessarily require that each robot know its own or the other robots’ absolute locations; in fact, a relative location is much more important. In nature, animals and birds locate each other by sight, giving only their relative positions. This is all that is required to flock with them and avoid collisions. It may, however, require a more complex system to accurately determine nearby robots’ relative locations than to calculate the absolute positions of each robot and communicate this information between them.

Potential Solutions

There are many possible solutions to this problem, and a number of these have been used in other projects or are currently used in different applications. For this project, the Lego platform limits which techniques are realistic options. A discussion of some of the potential solutions is given below.

Vision

As in nature, each robot could use sight to locate its flock mates. This would require a camera on every robot and software capable of recognising other robots in the scene. A variant on this method could determine the robot’s absolute location in a 3D space using image recognition of landmarks as used in Michael Werman and Qiu (1999). Visual localisation could potentially provide the most robust system, but would also be the most complex. The time and expense required for this puts it out of reach of this project.

Sound

Sound localisation could also be used in a way that would mimic the use of interaural time difference in animals to locate sounds (Blauert 1997). Robots would constantly ping, and would be able to detect the location of other pings using two microphones as ears (Murray et al. 2004). This method can only determine the angle to another robot. By using an RF signal to start a timer before receiving the ping, distance can also be calculated (Paul M.Maxim and Speiser).
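As a rough sketch of the geometry involved (using standard values, not hardware tested here): with two microphones a distance d apart, a time difference Δt between arrivals gives the bearing via sin θ = cΔt/d, and an effectively instantaneous RF pulse that starts a timer gives range from the one-way travel time of the ping:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def bearing_from_itd(dt, mic_separation):
    """Angle (radians) to a sound source from the interaural time
    difference dt (s) between two microphones mic_separation metres apart."""
    # Clamp in case dt slightly exceeds the physically possible maximum.
    return math.asin(max(-1.0, min(1.0, SPEED_OF_SOUND * dt / mic_separation)))

def range_from_rf_start(ping_delay):
    """Distance (m) to the emitter when an RF pulse (effectively instant)
    starts a timer that stops on arrival of the audio ping."""
    return SPEED_OF_SOUND * ping_delay
```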

Light

The simplest method for locating local robots would be to use light to seek other robots. This would require that each robot has a bright light attached to it and also one, or ideally two light sensors (Mike Brown). The robots would be programmed to find and follow the brightest light. In a flocking situation this should always be towards the mass of the flock.

GPS

On a larger scale, where robots are used exclusively outdoors, GPS could be used. Although only accurate to a metre at best, this would be suitable for some applications. The accuracy could be further improved using other sensor data, as in Swarm Systems’ MAV. Using this system would then require that each robot communicate its position to its neighbours.

Dead Reckoning

Dead reckoning is one of the most basic positioning tools; it uses only information about the distance travelled and in which direction (Lucas 2001). This technique is used in many projects due to its simplicity. The downside, however, is accuracy. Modelling the movements of a wheeled robot precisely requires a very detailed model taking into account wheel slippage, terrain alterations, rotation sensor differences and other mechanical inaccuracies. Again, other sensor data could be used to improve the accuracy of dead reckoning. Blitch (1996) uses a fibre optic gyroscope for improved dead reckoning accuracy. With a dead reckoning technique, each robot would then have to communicate its position to its neighbours.
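For a differential-drive robot like the NXT, the basic dead reckoning update from the two rotation sensors can be sketched as follows (a simplified model; the wheelbase value is an illustrative assumption, and a real implementation must contend with the slippage and sensor errors mentioned above):

```python
import math

def dead_reckon(x, y, heading, deg_left, deg_right,
                circumference=173.0, wheelbase=120.0):
    """Update a pose estimate (x, y in mm, heading in radians) from
    wheel rotation-sensor readings in degrees. The wheelbase is assumed."""
    d_left = circumference * deg_left / 360.0    # distance each wheel rolled
    d_right = circumference * deg_right / 360.0
    distance = (d_left + d_right) / 2.0          # distance of the robot centre
    heading += (d_right - d_left) / wheelbase    # change in heading
    return (x + distance * math.cos(heading),
            y + distance * math.sin(heading),
            heading)
```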

External positioning

Another possible solution would be to use a system which could monitor the positions of all the robots, and communicate these positions to the individual robots. This idea is not ideal in terms of flocking because it requires a third party to gather and communicate this information, something that is not seen in nature. However, the idea is not to coordinate the robots remotely, this system would just be used to provide positioning data. Brezak et al. (2008) and Klancar et al. (2004) use a camera mounted above the area in which the robots are allowed to move. Each robot has a coloured card mounted to the top of it which is located by processing the video input. Control instructions are sent to the robots via a wireless radio link. The image processing step could be simplified by using an infra red camera and putting infra red beacons on top of each robot.

Johnny Lee uses the Nintendo Wii Remote (Wiimote) in his multipoint finger tracking project. The Wiimote connects to a PC via Bluetooth and has built-in point tracking, but only for up to 4 points. The alternative is to use a standard PC webcam, remove its infra red filter and replace it with a filter that blocks visible light; (Johnson) provides some good information on this. This will provide an image containing only infra red light, allowing simple tracking of IR reflections or lights.

Summary of locating techniques

Limitations on time and cost restrict the techniques that this project will be able to use for locating the robots. The sensors that come with the Lego Mindstorms NXT kit will be available to use. These are limited to:

  • 1× Microphone
  • 1× Light sensor
  • 1× Ultrasonic distance measuring sensor
  • 3× Motor with rotation sensor

Some other equipment could also be used to track the robots’ locations, such as a Wiimote or infra red camera. The limitation in equipment rules out some of the techniques listed above. The techniques that are going to be investigated further for this project are light following, dead reckoning and fixed camera tracking.

The success of these different techniques is likely to vary significantly. Using the light following technique, it may be difficult to achieve good results: ambient light levels would have to be very low to give good detection of the robots’ individual lights. Additionally, the brightest light may not necessarily be the centre of the flock as hypothesised, and robots that are a long distance from the flock may never find it. (Mike Brown) has some success with this method. Dead reckoning could have equally significant issues: it is not a reliable or accurate method of positioning and will fail immediately if a robot collides with anything.

Preliminary Experiments

In order to choose the most suitable technique for acquiring the positions of the robots, it was necessary to carry out some preliminary experiments. These will be used to judge the accuracy and reliability of each technique and to select the best to develop further.

Light Following Experiments

The first experiments involve programming the robot to seek out and follow a light source. This would demonstrate the effectiveness of light following to locate other robots in the environment. A bright light was used in a darkened room and moved around the robot to see if it could track the light. Algorithm 1 describes the program written to test this method.

Algorithm 1: Following brightest light with a single sensor

	Take light reading;
	Turn left;
	Take light reading;
	if the new reading is greater then
		Drive forward;
	else
		Turn right twice;
		Drive forward;

Using this algorithm, the robot would only drive forward and turn right. This did not allow it to follow the light correctly, although providing that the light remained in front of the robot, it would continue to move towards it. If the light moved around to the right, the robot would turn right to follow it. If the light moved to the left, the robot would lose it and have to turn right through 360 degrees to find it again. A second algorithm was written in an attempt to combat this problem.

Algorithm 2: Following light taking two samples then moving

	Turn left;
	Take light reading;
	Turn right;
	Take light reading;
	if right reading > left reading then
		Turn right;
	else
		Turn left;
	Drive forward;

This was more successful than the previous algorithm, but still did not produce satisfactory results; the robot would only seem to find the brightest light very slowly, and with erratic movements. This would not be suitable for the project.

Since using a single light sensor appeared to be unreliable, slow and gave unnatural movements, the robot was modified to carry two light sensors each facing 45 degrees from straight ahead. This would allow a comparison between the light readings to be made and the robot to smoothly adjust its course accordingly. Algorithm 3 describes this method.

Algorithm 3: Following light using two sensors on the front of the robot

	difference = right reading - left reading;
	if difference > 0 then
		Turn right;
	else if difference < 0 then
		Turn left;
	Drive forwards;

This program was much more successful in terms of having the robot follow the light. Although the movement was still not very smooth, the robot would always eventually find the source of light in a room.
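Algorithm 3 translates naturally into a proportional steering rule: the difference between the two sensor readings sets how strongly the robot turns while it keeps driving forward. A sketch of this idea (the motor power range and gain are illustrative, not the values used on the robot):

```python
def light_follow_step(left_reading, right_reading, base_speed=50, gain=0.5):
    """Proportional variant of the two-sensor light follower: steer towards
    the brighter side while always driving forward.
    Returns (left_power, right_power) clamped to 0..100."""
    difference = right_reading - left_reading
    turn = gain * difference
    # Brighter on the right -> speed up the left wheel to turn right.
    left_power = max(0, min(100, base_speed + turn))
    right_power = max(0, min(100, base_speed - turn))
    return left_power, right_power
```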

From these experiments with the light sensors, it does not appear viable to use this as a method of tracking robots. Movements are too unrealistic and unreliable. Additionally, this method would also require extra equipment on each robot in the form of a beacon of light. It would also require that the robots are used only in a dark room, something which is not practical to achieve for the demonstration.

Dead Reckoning Experiments

Dead reckoning provides a little more promise than light following since it can be performed using only the Lego equipment, and can be used in any conditions. It does, however, have its drawbacks. To calculate the movements of the robot, we rely on accurate measurements of the number of rotations of the wheels and of their circumference. The Lego NXT servo motors contain a built-in rotation sensor accurate to ±1 degree. Calculating the circumference of the wheels is a simple task. By running a program on the robot which drives in a straight line for a number of rotations (r) and measuring the distance travelled (s), the circumference (C) of the wheels is given by,

C = s/r (1)

In order to gain accurate measurements this experiment was run multiple times at various motor speeds. The initial value used for the circumference was calculated from a measurement of the diameter of the wheels.

Table 1 shows the actual distance travelled using the circumference obtained from a direct measurement of the wheel. The recalculated circumference is also shown; taking an average gives 173.43mm. Due to the lack of floating point numbers on the NXT, this is rounded down to 173mm for the second experiment.

Table 1 – Actual distance travelled with the circumference used from a measurement on the wheel when programmed to drive 2000mm
| Calculated Circumference (mm) | Degrees left | Degrees right | Actual Distance (mm) | Speed | Recalculated Circumference (mm) | Battery |
|---|---|---|---|---|---|---|
| 179 | 4034 | 4030 | 1950 | 50 | 174.11 | Low |
| 179 | 4031 | 4028 | 1945 | 50 | 173.77 | Low |
| 179 | 4032 | 4033 | 1946 | 50 | 173.73 | Low |
| 179 | 4026 | 4030 | 1925 | 50 | 172.05 | Low |
| 179 | 4029 | 4030 | 1942 | 50 | 173.50 | Low |
Table 2 – Actual distance travelled with a recalculated wheel circumference of 173mm. Motor speeds of 100 and 50 were used
| Calculated Circumference (mm) | Degrees left | Degrees right | Actual Distance (mm) | Speed | Recalculated Circumference (mm) | Battery |
|---|---|---|---|---|---|---|
| 173 | 4187 | 4191 | 2020 | 100 | 173.60 | Low |
| 173 | 4186 | 4187 | 2025 | 100 | 174.13 | Low |
| 173 | 4183 | 4187 | 2028 | 100 | 174.45 | Low |
| 173 | 4186 | 4186 | 2023 | 100 | 173.98 | Low |
| 173 | 4186 | 4186 | 2018 | 100 | 173.55 | Low |
| 173 | 4171 | 4171 | 2015 | 50 | 173.92 | Low |
| 173 | 4171 | 4170 | 2015 | 50 | 173.94 | Low |
| 173 | 4168 | 4171 | 2015 | 50 | 173.98 | Low |
| 173 | 4172 | 4169 | 2015 | 50 | 173.94 | Low |
| 173 | 4170 | 4171 | 2010 | 50 | 173.50 | Low |

Table 2 begins to show a very accurate measurement of 2000mm. The average actual error for speed 50 is only 0.7% and for speed 100, 1.1%. During the tests, it appeared as though the robot was running on after the rotation counts had reached their limit and the motors had stopped, meaning that the actual distance travelled was further than it should have been. This can also be seen in the results. Dividing 2000mm by 173mm and multiplying by 360 degrees gives the number of degrees of rotation required to travel 2000mm. In the experiment, this was always less than the actual number of degrees counted: the calculated figure was 4161 degrees, while the average measured figure was 4185 (left) and 4187 (right) at speed 100, and 4170.4 (identical for both wheels) at speed 50. This indicated that the overrun was a function of the speed. In the next experiment this overrun was accounted for by subtracting a factor of speed/4 from the distance to be travelled.
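These figures can be checked directly (values taken from the tables above):

```python
# Degrees of wheel rotation needed to cover 2000 mm on 173 mm-circumference
# wheels; the NXT has no floating point, so the value is truncated.
target_degrees = int(2000 / 173 * 360)            # 4161

# Average measured rotations overshot the target, and the overshoot
# grew with motor speed, suggesting a speed-dependent stopping overrun.
overrun_speed_50 = 4170.4 - 2000 / 173 * 360      # ~8.6 degrees
overrun_speed_100 = 4186.0 - 2000 / 173 * 360     # ~24.2 degrees
```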

Table 3 – Actual distance travelled with stopping overrun factor of speed/4
| Calculated Circumference (mm) | Degrees left | Degrees right | Actual Distance (mm) | Speed | Recalculated Circumference (mm) | Battery |
|---|---|---|---|---|---|---|
| 173 | 4146 | 4146 | 2000 | 50 | 173.66 | Very Low |
| 173 | 4143 | 4145 | 2005 | 50 | 174.18 | Very Low |
| 173 | 4145 | 4141 | 2005 | 50 | 174.22 | Very Low |
| 173 | 4140 | 4145 | 2000 | 50 | 173.81 | Very Low |
| 173 | 4149 | 4140 | 2000 | 50 | 173.72 | Very Low |

Table 3 shows the much more accurate distances achieved with the circumference of 173mm and a stopping factor of speed/4. The average actual error for these measurements is only 0.1%, which, for the purposes of this project, is more than accurate enough.

From these experiments, it would appear that the rotary encoders on the motors are accurate enough to successfully track the position of the robots during flocking. The fundamental issues with dead reckoning remain, however: it cannot compensate for collisions which change the robot’s direction, and it may not be accurate over a range of surfaces and terrains.

Camera Tracking Experiments

In order to test the viability of using a camera to track the robots’ positions, a simple experiment was prepared. This method relies on tracking colours in an image and using these to calculate the positions of robots within the space covered by the camera. For this experiment, a webcam and a piece of black card were used. The image tracking program is written in Processing, a Java-based language. An additional library (Nimoy 2001) was used to acquire the camera image; in addition to image acquisition, it contains features to track globs of similar colours in an image. The purpose of this experiment is to determine whether the software is fast and accurate enough to be suitable for tracking the robots.

Figure 2 – Bounding boxes on black card

Figure 2 shows that the program has successfully found the large groups of similar coloured pixels and bound them with rectangles. The colour to track was set to black with a small tolerance. Therefore, it has also tracked very dark areas like the top right and bottom left corners. Even on a low specification laptop, this program has a high enough frame rate to track fast moving objects in the scene. It is also very accurate provided that the colour of the object being tracked remains the same. Some issues arise if the card is tilted at different angles since this reflects light differently and causes the card to appear lighter. This is not likely to cause a problem in the demonstration since the camera will be fixed and the white colour of the robots will not change so much depending on the angle (reflecting more light will only make them appear brighter).

Refined Solutions

From the preliminary experiments, it was clear that the best method for locating the robots would involve colour tracking with a camera. This would give Cartesian coordinates of the robots on the floor. It would not, however, be able to determine their orientation. To extract orientation data using a camera would either require that the camera locate ‘front’ markers on the robots or that it extrapolate the direction of movement from the robot’s previous positions. These techniques would likely not be accurate or fast enough to reliably keep track of the heading of each robot (very important for flocking). Instead, the dead reckoning technique described above will be used to calculate this information.

The program used in the preliminary experiment was developed and refined for tracking the positions of all of the robots and preparing this data to be transmitted. The pseudocode for the main loop of the final program is shown in algorithm 4.

	On first mouse click, set tracking colour and position of robot 0;
	On second mouse click, set position of robot 1;
	On third mouse click, set position of robot 2;
	foreach robot do
		Find all of the globs of the tracking colour;
		Find the closest glob to this robot;
		Calculate the glob centre;
		Set this as the robot’s new position;
	Write the positions of all robots to a text file;
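The main loop above can be sketched in Python. The glob detection itself is done by the JMyron library in the real program, so here the glob centres are assumed to be given as a list of (x, y) tuples; the function names are illustrative, not those in the Processing source.

```python
import math

def update_positions(robot_positions, glob_centres):
    """For each robot, adopt the centre of the nearest colour glob
    as its new position (the per-robot step of algorithm 4)."""
    new_positions = []
    for rx, ry in robot_positions:
        nearest = min(glob_centres,
                      key=lambda g: math.hypot(g[0] - rx, g[1] - ry))
        new_positions.append(nearest)
    return new_positions

def format_line(positions):
    """Serialise positions as the comma-separated output line
    written to the text file, e.g. '293,150,209,404,323,238,'."""
    return "".join(f"{x},{y}," for x, y in positions)
```

Assigning each robot to its nearest glob is what keeps the identities 0, 1 and 2 stable between frames after the initial mouse clicks.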

In addition to this, the program also automatically adjusts the tolerance of the colour tracking in accordance with the number of globs that it finds in the image. If there are more globs than there are robots, then the tolerance is reduced, and vice versa. The program draws an overlay on the camera image of all of the globs that have been found and also the current positions of the robots. This aids in verification and debugging. The robots’ positions are output such that the bottom left of the camera image is position (0,0) and the top right is the maximum resolution of the camera, e.g. (640,480). The position output file contains one line per frame: a comma-separated list of x,y pairs, one pair per robot.

e.g. 293,150,209,404,323,238,


The program written to track the locations of colours simply finds all of the globs of a particular colour and then finds the centres of these globs. Figure 3 shows an image with some similarly coloured red pixels; these are bounded by a box with the centre marked. The x and y coordinates for these centre points are saved in an array. For each frame, the positions are also written to a text file. In order to ensure that the software can track three robots simultaneously, an experiment was prepared in which three red squares on a piece of paper were moved across the camera’s view.

Figure 3: Red pixels on a blue background. First bounded by a box then the centre of this box is used as the centre of the robot


The positions of the squares are recorded to a text file and plotted in a separate program. The plot ought to show the same path that the squares took.

Discussion of Results

The program executes successfully and finds all of the squares on the paper. When moved slowly across the camera, it can track the centre points of the squares. There are some small issues with lighting. If the angle of the paper is changed, then the colour of the squares alters slightly and the software loses track of them. For the demonstration, however, the camera will be in a fixed position facing the floor; lighting is also from the ceiling and is uniform around the camera. This will give a much more stable colour to the robots and should not present an issue. If there is any occurrence of a similar colour in the background of the image, then this will also be tracked and interfere with tracking the squares. Again, this will be solved when the camera is mounted on the ceiling, since the floor will be a uniform colour and the environment free from interfering colours. The tracking software also fails if the squares are moved too quickly across the scene. This could potentially be an issue, but can be solved by adjusting the robots’ speeds such that they can always be tracked.


a. Camera image


b. Recorded Positions

Figure 4: a. Camera image showing the red squares being tracked and the labels of 0, 1 and 2. b. The positions as recorded in a text file plotted using a different colour for each tracked square.

There are still a few disadvantages to using this technique, the biggest being that it requires a computer, camera, and communication protocol to track and communicate the robots’ positions. This adds some complexity to the project, although the improved speed and accuracy of tracking will justify the extra time and effort spent. Another small issue is that any additional objects placed in the arena that are a similar colour to the robots will interfere with the tracking.

Communicating Between Robots

This section presents a discussion of the potential solutions to the problem of communicating positional data between the robots. Some experiments are performed and analysed, and the chosen solution is discussed and validated.

Description of the problem

If a location technique is used which requires communicating position data to robots, then a communication protocol will have to be used. Again, there are a number of possible technologies that could be used for this. The choice will be determined by cost and complexity.

Potential Solutions

The following possible solutions have been widely used for communications in many different fields, and the discussed network topologies all have their merits for particular applications. The chosen solution is limited by the available hardware on the NXT.

Infra Red

The original Lego Mindstorms robots could communicate with each other via infra red (Lego 1998). Using a serial-over-infra-red type of protocol, data can be transmitted fairly simply and effectively. Line of sight is required for communication and interference can be an issue. Although the first version of the Mindstorms robot used infra red for communication, this has been replaced with Bluetooth in the NXT. Therefore, this method will not be used in this project.

Radio Frequency

Communication over radio frequency is widely used in many applications and can be a cheap, simple solution. Robots could broadcast their positions along with an identification code. Each robot would listen for broadcasts and update a table with local robot positions. In this project, using RF communication would require extra hardware for each robot. Benavidez (2008) uses XBee-PRO 802.15.4 modules, developed by MaxStream, with a short-range communication protocol called ZigBee. Technically this would be a suitable technology for this project; however, it would add significant cost and complexity.


Bluetooth

The Mindstorms NXT comes with Bluetooth communication hardware. This can be used to communicate between robot and PC or for inter-robot communication. There are a number of limitations to the NXT Bluetooth protocol. Data can either be transmitted or read directly into a program via functions or using ‘mailboxes’. Mailboxes act as queues where data is stored until it is read by a program. Toledo (2006) provides a comprehensive analysis of the NXT Bluetooth protocol. The analysis shows that the protocol is not reliable, so it may be necessary to write code that ensures reliability. Even the minimum data rate for inter-robot communication is sufficient to transmit the small amount of data required for robot positioning.

The communication between robots is described as a master-slave configuration where the master robot always initiates a connection and the slave just accepts it. A master robot can initiate up to three connections to slave robots. A slave robot can only connect to one master robot. The Not eXactly C (NXC) programming language for the NXT provides functions for searching for and connecting to other NXT bricks. This allows for a program that can communicate with more than three other robots, by breaking and making connections between them. This leads to the possibility of different network topologies used to communicate data.

Star Network

In a network using only three Lego robots, a star topology could be used. This is a representation of the master-slave system that the robots use. The master is the centre of the star and the slaves its points. Data is transmitted from the centre outwards and acknowledgement packets can be sent back to the master. This method is simple, but in this case not scalable since each robot can only have three slaves. Additionally, if the master robot is lost or the connection broken, then the two slaves are effectively disabled too. A tree network extends on this idea.

Figure 5: A Star network topology


Tree Network

Taking the star network and allowing connections to be broken and remade would allow a tree network topology. Each robot would have a parent robot and a number of child robots. Data would be transmitted from the top parent to its children. The connection between these would then be broken and a new connection established from each of these children to their children. This network still suffers in case of a missing connection and would require restructuring.

a. First set of connections


b. Reconnected to children


Figure 6: The connection states of a tree topology

Chain Network

Connecting robots together in a predefined sequence would be a simple method of transmitting data to every robot. The data would be passed along the line in the same way as a line of people might transport objects a short distance. This would require each robot to connect to a sender robot to receive the data, disconnect, then send the data on to a receiver robot. This technique, however, is not robust: if a robot anywhere along the chain fails to communicate then the whole network breaks down. If an acknowledgement transmission were sent back from the last robot, then the chain could restructure itself in case of a broken link.

Figure 7: A diagram of a chain network. Only alternate links are connected at any one time. This does not take full advantage of the available connections


Mesh Network

Benavidez (2008) uses a mesh network to provide communication between robots in a swarm. This system is seen as robust and scalable, and can extend to swarms of robots in different environments. Robots communicate with each other using a decentralised approach that relies on creating connections between themselves with no priorities or predefined routes. Each robot has the responsibility of transmitting the data it has to every other robot. This type of network means that robot losses are not a threat to communication.

Figure 8: A mesh network showing a snapshot of possible connections made at any one time.


Preliminary Experiments

The experiments for this section all use Bluetooth to transmit the positional data of the robots from the PC to the individual flockmates. As shown above, there are a number of possible network configurations. The choice of configuration depends on the speed of different aspects of the Bluetooth network. In an ideal situation, the NXT would be able to make connections, send the data, receive data, and disconnect all within 100ms. This would allow a mesh network, the most stable and robust of all the topologies, to function. However, it is unlikely that this sort of time could be achieved, since connecting and disconnecting alone take around ten times longer than this.

C++ Library

Initial experimentation on communicating with the NXT involved using a Bluetooth dongle on the PC connected to a master robot. This network is effectively a star, as described in figure 5. The master robot would receive all of the positional data from the PC and then retransmit it to each of the other robots in the flock in turn. This configuration requires that the master robot initiates the Bluetooth connection to the PC and the two other robots. This may pose a problem since data needs to be transmitted from the PC to the master robot, in the opposite direction to that specified in the Bluetooth API.

Using Anders’ (2009) C++ Bluetooth communication library to connect to the NXT from a C++ program gave a successful connection and allowed transmission of data from the PC to the master robot. An NXC program on the robot then relayed that data to the slave robots, which received and displayed it. They then sent an acknowledgement message back to the master, and the master relayed these acknowledgements back to the PC.

This connection is very unreliable; only around 50% of acknowledgements make it back to the PC. It is also very slow with latencies of around a second. Forcing messages from the PC to the robot appears to cause the greatest losses. This solution is much too slow to be of use for transmitting real time positional data. Some of the problem may lie in the use of the Bluetooth radio on the NXT. In this set up, each robot has to alternate between transmitting and receiving which requires a switch in the state of the Bluetooth radio. This switch takes around 40ms (Toledo 2006), a long time in terms of message transmission.

In an attempt to improve the speed and reliability of the connection, a second set of programs was written which did not send acknowledgements back to the PC. This means that the slave robots do not have to switch their radio states, although the master is still required to. Even with these optimisations, the data loss is still huge, with just over half of the transmitted packets making it to the last slave robot. This is likely due to data not being received by the master robot while it is transmitting data on to the slave robots. A potential solution for these losses could be to combine camera tracking and dead reckoning. While the robot is waiting to receive the next set of positional data, it can keep tracking its location using dead reckoning. This would not, however, be able to track the locations of the other robots.


Matlab Library

Some further research into methods for sending data to the NXT from a PC found a library specifically for communicating between Matlab and the NXT. The RWTH Mindstorms NXT Toolbox is designed for educational purposes and supports communicating with the NXT brick using Matlab over Bluetooth and, more importantly, USB. Using USB to connect to the robots is a much faster and more reliable method than Bluetooth. It immediately removes a great deal of the issues that Bluetooth presents and simplifies the challenge of communication significantly. One NXT brick will be connected directly to the PC via USB and will be stationary. This brick will act as a router. It will connect with the three robots via Bluetooth. This is illustrated in figure 9.

Figure 9: Configuration of router network using a USB connection to replace problematic Bluetooth.


This configuration should theoretically be faster than using Bluetooth, not only because USB has a lower latency and higher bandwidth but because the Bluetooth radios will no longer need to switch between transmitting and receiving. It should also be more reliable since the slaves are always ready to receive and so will not block and lose messages.

The code for this experiment is based on work by Daniele Benedettelli (2009). There are three parts to the program: a Matlab script which transmits data via USB to the router NXT brick; a router NXC program which runs on the router brick and retransmits the data received from the PC to the three slave robots via Bluetooth; and a slave program which runs on the three other NXTs and simply displays the data received over Bluetooth on the screen. This is a much neater solution and complies with all of the Bluetooth and NXT protocols for communication.

In order to test the speed and reliability of this connection method, a test was executed using 2 NXT bricks. One of these was connected via USB to the computer and the other connected to the first via Bluetooth. Messages were sent for 2 minutes to the robots and the number sent and lost was recorded. To discover the maximum message throughput, the time between transmitting messages was reduced over a number of repeated runs. The results of this experiment are shown in table 5. This table shows that the message losses are very small even at very high transmission rates. Loss rates, however, increase significantly if messages are transmitted faster than 50 per second. The shortest possible time between transmitting messages with very little loss is 31ms.

Table 5 – Results of timing and reliability test on USB – Bluetooth router configuration
Wait in router bot (ms)  Wait in slave bot (ms)  Pause between sending messages (ms)  Messages sent  Messages received (router)  Messages received (slave)  Messages lost  Time per message (ms)


10 500 245 245 245 0 489.8
20 10 400 301 301 301 0 398.7
20 10 300 393 390 390 3 307.7
20 10 200 576 576 576 0 208.3
20 10 100 1188 1188 1188 0 101.0
20 10 50 1923 1923 1923 0 62.4
20 10 20 3826 3816 3816 10 31.4
20 10 20 3824 3690 3688 136 32.5
20 10 20 3846 3841 3841 5 31.2
20 10 20 3852 3852 3852 0 31.2
20 10 10 7651 5452 5452 2199 22.0
10 10 20 3844 3785 3785 59 31.7
5 10 20 3848 3820 3820 28 31.4
0 10 20 3857 3848 3845 12 31.2
0 10 20 3861 3847 3844 17 31.2
0 10 20 3846 3836 3836 10 31.3
0 5 20 3881 3871 3871 10 31.0
0 5 20 3829 3818 3813 16 31.5
0 5 20 3872 3792 3792 80 31.6
0 0 20 3862 3842 3842 20 31.2
0 0 20 3854 3784 3784 70 31.7
0 0 20 3852 3811 3811 41 31.5
0 0 10 7578 5765 5676 1902 21.1
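The time-per-message column in table 5 can be reproduced directly from the message counts, since each run lasted two minutes:

```python
# Each run in table 5 lasted two minutes (120,000 ms), so the
# time-per-message column is the run length divided by the number
# of messages sent.
RUN_MS = 2 * 60 * 1000

def time_per_message(messages_sent):
    return round(RUN_MS / messages_sent, 1)

print(time_per_message(3826))  # 20/10/20 ms row → 31.4
print(time_per_message(245))   # first row → 489.8
```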

Refined Solutions

The programs described above have been modified slightly to make connections even easier and optimise the code. Using the results from the timing experiment, the robots are set up with the highest message rate possible. The Matlab script used in the final implementation simply reads the file output by the tracking software and transmits this data to the master robot over USB. This is described in algorithm 5.

Algorithm 5: Matlab script for reading in positional data and transferring it to the master robot.

	Connect to the USB NXT;
	Start the master program;
	Wait to receive an acknowledgement;
	while true do
		Read a line in from the positions text file;
		Send the line to the master robot;
		Pause for 0.5 seconds;
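The loop of algorithm 5 can be sketched in Python; the real script is written in Matlab using the RWTH toolbox, so the function names here and the `send` callback (a stand-in for the USB transmit call) are illustrative only.

```python
import time

def read_last_line(path):
    """Return the most recent complete line of the positions file
    written by the tracking program."""
    with open(path) as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    return lines[-1] if lines else None

def relay_positions(path, send, period=0.5, iterations=None):
    """Poll the positions file and forward each new line to the
    router brick via `send`. `iterations` bounds the loop for
    testing; the real script runs indefinitely."""
    last = None
    n = 0
    while iterations is None or n < iterations:
        line = read_last_line(path)
        if line and line != last:
            send(line)
            last = line
        time.sleep(period)
        n += 1
```

Forwarding a line only when it differs from the previous one avoids flooding the router with duplicate positional data between camera frames.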

The original master program was functional, but some extra features have been added to streamline the system. The master program can automatically distribute and initiate a new version of the slave program to the slave robots. This saves plugging in each of the slaves separately to update them. Algorithm 6 shows the pseudocode for the master program.

Algorithm 6: Connecting, distributing programs and transmitting positional data to slave robots

	foreach slave device do
		Find its Bluetooth id in the device table;
		Attempt to connect to that id;
		if successful then
			if slave program exists on the master file system then
				Delete slave program from the remote slave;
				Send the updated slave program;
			Start the slave program;
		else
			Display error;
		Delete slave program from master;
	Send acknowledgement to PC;
	while true do
		Receive message over USB;
		foreach slave do
			Wait for Bluetooth to become free;
			Transmit message over Bluetooth;

The data receiving part of the slave program is very simple: it waits for the Bluetooth connection to become available and a message to appear in the mailbox. It then splits this message on commas to retrieve the positions of each of the robots. These are stored in an array which is globally available to the program so that the positions can be used to control flocking. To make the program universal for all slave robots, the name of the robot (0, 1 or 2) is used to index the array of robot positions, and the result is stored in variables x and y so that the position of the robot running the program can always be referenced. The NXC code for receiving messages is shown below.

		if (ReceiveMessage(0, true, msg) != STAT_MSG_EMPTY_MAILBOX) {
			int bot = 0;
			int count = 0;
			string substring;
			for (int i = 0; i < StrLen(msg); i++) {
				if (StrIndex(msg, i) == ',') {
					// extract the number between the previous comma and this one
					substring = SubStr(msg, bot, i - bot);
					pos[count/2][count%2] = StrToNum(substring);
					bot = i + 1;
					count++;
				}
			}
			x = pos[thisBot][0];
			y = pos[thisBot][1];
		}
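The same split-on-commas logic can be mirrored in Python to check the pos[count/2][count%2] index arithmetic; the function name is illustrative.

```python
def parse_positions(msg, n_robots=3):
    """Mirror of the NXC parsing loop: split the message on commas
    and pack the numbers into [x, y] pairs, one per robot."""
    pos = [[0, 0] for _ in range(n_robots)]
    bot = 0
    count = 0
    for i, ch in enumerate(msg):
        if ch == ",":
            pos[count // 2][count % 2] = int(msg[bot:i])
            bot = i + 1
            count += 1
    return pos

print(parse_positions("293,150,209,404,323,238,"))
# → [[293, 150], [209, 404], [323, 238]]
```

Even values of count land in the x column and odd values in the y column, which is exactly what the integer division and modulo achieve on the NXT.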


In order to ensure that the positional data is being received by all of the robots, an experiment similar to the preliminary timing experiment was prepared. This used all four NXT bricks to test the timing and possible message rates of this configuration. A Matlab script transfers a message containing positional data to the master NXT via USB. The master NXT then transmits the incoming message to the three slave robots. This experiment is run for a minute for each different pause time and repeated five times.

Discussion of Results

Table 4 shows the average results for different pauses between sending messages from the master NXT. At pauses of 450ms and above, there are no message losses. However, times less than 500ms cause a delay on sending, so that when the Matlab script is stopped at the end of the experiment, messages continue to send. This means that the positional data would not be in real time; it would have a delay of around a second. The pause times here are greater than in the preliminary experiment as the master robot has to transmit three messages each time it receives one message from the Matlab script. Taking this into account, the time per message has been calculated. At 500ms between incoming messages, we can see that the time per message sent is 162ms. This is much greater than the 32ms found in the preliminary experiment. The messages in each experiment are the same size and so we can assume that they take the same time to transmit. Therefore, the time it takes to switch between sending messages to different robots is about 130ms.

Table 4 – Averaged results for different pauses between sending messages from the master NXT
Time between messages (ms)  Messages sent  Received (master)  Received (slave)  Messages lost  Time per message (ms)
600 99 99 99 0 202.02
550 106 106 106 0 188.68
500 123 123 123 0 162.60
450 132 132 132 0 151.52
400 148 116 116 32 135.14
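The time-per-message figures in table 4, and the switching overhead derived from them, can be reproduced from the counts: each run lasted one minute, and the master relays every incoming message to three slaves.

```python
# Each run lasted one minute (60,000 ms); the master sends three
# Bluetooth messages per incoming USB message, so the time per
# transmitted message is the run length over 3x the incoming count.
RUN_MS = 60 * 1000

def time_per_relayed_message(incoming_messages):
    return round(RUN_MS / (3 * incoming_messages), 2)

per_msg = time_per_relayed_message(123)  # 500 ms row → 162.6
# subtracting the ~32 ms raw send time measured in the two-brick
# test leaves the per-robot switching overhead of about 130 ms
switch_ms = per_msg - 32
```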

Although these times are much greater than those in the preliminary experiment, they are still within reasonable limits. A pause of 500ms means that each robot will receive new positional data twice a second. This may mean that the robots have to drive more slowly than initially intended, but this rate ought to be fast enough to allow the robots to respond in good time. The message sending rate will not be set in the Matlab script as shown here. Instead, the camera tracking program will be limited to two frames per second. This will mean that positional data is only written to a text file every 500ms and the Matlab script will wait until there is a new line in the file.

Avoiding Collisions

This section describes the issues surrounding collision avoidance, some possible solutions and further investigations into some of these. The chosen methods are presented and evaluated.

Description of the problem

For the robots to be able to flock successfully, they will have to avoid colliding with each other and objects in the environment. Again, there are a number of solutions to this problem; the best will depend somewhat on the techniques used to solve the problems in other sections. As with most of the issues discussed here, the best results are likely to come from more complex solutions, some of which will be prohibitively expensive or complex.

Potential Solutions

Some of the possible solutions are discussed here along with an analysis of their suitability for this project. The solution will be chosen based on complexity and compatibility with the hardware available.

Bump Sensors

The simplest solution is to not avoid collisions at all but to rely on them to trigger bump sensors on the robots. If a bump is detected, the robot would have to stop immediately and change its direction. This can give quite effective results, but is not ideal in terms of flocking as it is not seen in nature. This technique would not work with dead reckoning positioning since the robots would lose their position accuracy. In severe cases, the robots could collide causing a change in direction and a complete loss of knowledge of position. This would be an unrecoverable situation.


Ultrasonic Sensors

The NXT comes with an ultrasonic distance measuring device. This sensor sends an ultrasonic ping and measures the time it takes to receive the reflected sound wave. This time is directly proportional to the distance of the object reflecting the ping. Using the speed of sound, the sensor calculates the exact distance to the object. According to the Lego Mindstorms product site, the ultrasonic sensor is capable of measuring:

“distances from 0 to 255 centimeters with a precision of +/- 3cm.”

However, (Brown) suggests that the sensors may not be quite this accurate. Brown’s experiments show that the sensor has a precision closer to +/- 10cm and a minimum measurable distance of around 5cm. Using the ultrasonic sensor will allow the robot to detect objects in front of it. If the distance returned by the sensor drops below a threshold value, then the robot can be made to change its direction. Using multiple distance sensors could be beneficial. This would allow the robot to always move away from objects in front of it, using the sensors like eyes. It could also give a more natural movement, where the severity of the turn is proportional to the distance to the detected object.

Positional Data

Some of the robot locating techniques require that each robot knows its absolute position and the absolute positions of other robots nearby. If this is the case, then the robots need not have extra sensors for avoiding collisions with each other. The Euclidean distances between robots can be calculated using positional data and from this, the heading can be determined. Using this method allows for a robot to steer to avoid collisions more severely depending on the likelihood of the collision. This method of collision avoidance may be the most promising in terms of realism; movements can be smoothed and prioritised. To avoid collisions with the environment, objects could be mapped out and their positions programmed on to the robots. This method, however, does not give any provisions for avoiding collisions with arbitrary objects in the environment. This could be achieved using the bump sensors or ultrasound sensors.

Combining Multiple Techniques

Ultimately, the most promising option will be to combine solutions, using the positional data that the robot has in combination with the ultrasonic sensors to give accurate and smooth collision avoidance. The positional data will be used to ensure that the robots do not come close enough to each other to have a collision, but if the ultrasonic sensors detect a collision is imminent, then these will take priority over the motors and steer the robot to avoid the collision. Some initial experiments will be performed on the ultrasonic sensors to determine their accuracy and suitability for the task.

Preliminary Experiments

In order to verify the claims about the ultrasonic sensors, an experiment was set up to measure their accuracy. This involved setting the robot on a bench and recording the distance between it and a large object, as reported by the ultrasonic sensor and as measured with a tape measure. Figure 10 shows the results of this experiment.

Figure 10: A plot of the distance measured by an ultrasonic sensor against the actual distance. The linearity shows that the sensor is very accurate, always within 1cm. The shortest measurable distance is 7cm


According to these measurements, it would appear that the shortest distance that the sensors can measure is about 7 cm. The average relative accuracy was calculated at just over 6%. These results are all within an acceptable range for use in the way required in this project. The actual distance is not important; it will only be used as a means of adjusting the rate of turn that the robots perform when avoiding obstacles. Of more interest is the angle of ‘view’ of the sensors. This is important because it will greatly affect the way that the sensors are used. A very narrow angle of view will only give correctly measured distances when an object is directly in front of and perpendicular to the sensor. This will mean that the robot may not be able to detect collisions from the sides. A very wide angle of view will mean that the sensors may be triggered too frequently, causing the robots to try to avoid collisions that are not in front of them and are not likely.

Another experiment was set up to measure the angle of detection of the sensors. This involved moving an object off of the centre line of the sensor and recording the distances returned by it. A large box was placed initially directly in front of the sensor, 20cm away from it. It was then moved to the left and right 2cm at a time. Measurement readings were taken from the sensor. Figure 11 shows this set up from above and figure 12 shows the results.

Figure 11: Configuration of ultrasonic sensor tests where objects are moved x centimetres away from the centre line of the sensor, 20 centimetres away.


Figure 12: Ability to detect objects as they move away from directly in front of the sensor. Objects are no longer seen after 10cm away on the right and 15cm on the left.


It is evident from this experiment that the accuracy of the sensor drops considerably as soon as the object is moved from directly in front of it. Although the sensor should always be measuring 20cm, it never actually does and is significantly wrong in most cases. As discussed above, this is not an issue since the actual distance is not important, just that the sensor recognises that an object is close. To infer a viewing angle from this, we must look at the sudden increase in measured distance just after the 10cm mark. This indicates that as soon as an object which is 20cm away from the sensor moves 10cm away from directly in front, then the distance measured becomes very large. This shows that the sensor can no longer ‘see’ the object. Equation 2 is used to calculate the angle at which the sensor can detect objects.

θ = 2 × arctan(opposite / adjacent) = 2 × arctan(x / 20)    (2)

The detection angle calculated where x is 10cm is 53 degrees. Two ultrasonic sensors will be located on the front of the robot. In order to cover the area directly in front of the robot and slightly to the sides, the sensors ought to be angled around 45 degrees away from each other.
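Equation 2 can be checked numerically. The 10cm cutoff observed on the right-hand side gives approximately 53 degrees, while the 15cm left-hand cutoff would give around 74 degrees; the function name is illustrative.

```python
import math

def detection_angle_deg(x_cm, range_cm=20):
    """Equation 2: full viewing angle from the off-centre cutoff
    distance x at a given range."""
    return math.degrees(2 * math.atan(x_cm / range_cm))

print(round(detection_angle_deg(10), 1))  # → 53.1
print(round(detection_angle_deg(15), 1))  # → 73.7
```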

A program was written to use the ultrasonic sensors to avoid collisions. This simple program calculated the difference between the measured values on the two sensors and used this to set the speed and differential values of the motors. By using the difference, priority to turn away is given to closer objects. To test this program, the robot was placed in the centre of a room with only a single door to exit by. The robot should drive around the room without bumping into any obstacles and finally make it through the door. Upon running the program, the robot did appear to perform very well. It would drive towards a wall, detect it first on one sensor and begin to turn away from that sensor. The turning movement was very smooth and natural; the speed of the turn is inversely proportional to the distance to the object. The robot would drive up to a wall, turn away from it and then continue driving along its length, much as a bird might fly along a wall. Eventually, the robot would happen upon the doorway at the correct angle and drive through it.

Some small problems arose from this experiment. Occasionally, the robot would get stuck in a corner and stop moving altogether about 10cm away from the walls. This is because the motors are driven according to the difference between the sensors; in a corner, this difference is zero and so the motors do not move at all. There are a number of solutions to this: the zero case could be dealt with separately and cause the robot to drive backwards, or the motors could be driven not by the difference but simply by the value on each sensor. Another issue involved the sensors not detecting small or narrow objects such as chair legs. This is to be expected, since very small surface areas will not reflect many sound waves and so will not be detected. It is not a serious issue because, for the purposes of the demo, the environment that the robots are in will not contain these types of obstacles.

Using Positional Data

In order to test the effectiveness of using the robot's positional data to avoid collisions, the program used to receive positional data from the master robot was modified. Once a particular slave robot had received the positions of all of the other robots, it calculated the distance to them. If this distance was less than 100 pixels, then the program would stop the robot from moving.

Initially, this did not seem to function correctly; the robots would not stop until they had bumped into each other. Increasing the distance to 200 pixels solved this problem, but did not appear logical. By watching the camera image it became clear that there was a delay between the robots getting too close and them stopping. This indicated that the positional data was not being updated fast enough. To attempt to alleviate this issue, the Matlab script was modified to run without any pauses; however, this just caused data losses since the last set of data had not finished transmitting before the next set arrived. The solution was to reduce the rate at which the camera tracking program wrote to the positions file. This meant that the USB and Bluetooth connections were not being flooded with messages that they could not send. This kept the positions on the robots in real time and prevented them from colliding. Figure 13 shows the positions of the robots that were recorded using the camera. It can be seen that the robots begin to move towards one another and, at a distance of 104 pixels apart, have stopped. This is the correct result and is repeatable.

Figure 13: Recorded positions of two robots as they initially drive towards one another then stop when they become too close.


Refined Solutions

As discussed above, the best solution here is to combine the features of the ultrasonic sensors and the positional data to prevent the robots from colliding with one another. This gives the most realistic motion most of the time, but stops robots immediately if a collision is imminent. By using two ultrasonic sensors on each robot, it is possible to determine which direction the obstacle is in and to steer around it. The final solution combines both of the programs used in the preliminary experiments with some small changes and optimisations. Firstly, to deal with the issue of getting stuck, the robot will now reverse if both of the sensors are giving a reading below the threshold value. This means that the robot will back away from any oncoming obstacles, such as another robot. Although not very natural, this is necessary due to the confined area in which the robots have to move. There may be situations where robots cannot drive around one another and do have to reverse. The NXC code for the avoidance algorithm is shown below as algorithm 7.

Algorithm 7: Avoiding obstacles using the ultrasonic sensors

	speed = 50;
	while true do
		leftClose = false;
		right = right sensor value;
		left = left sensor value;
		if left < THRESHOLD then
			rightSpeed = left*4 - 100;
			leftClose = true;
		if right < THRESHOLD then
			leftSpeed = right*4 - 100;
			if leftClose == true then
				speed = -50;  (both sensors blocked: reverse)
			else
				speed = 50;
		if speed != 0 then
			Drive motors in synchronisation with one another at speed, speed,
			and difference of leftSpeed - rightSpeed;

The positional data was used in the same way as before, but instead of stopping the robots, the robots were directed to drive away from each other if they got too close. This was done by calculating the direction vector between the two robots and having them drive in the opposite direction to this vector. (More on this in the Flocking section).


To test the combination of these programs, two robots were set up beneath the camera facing each other. The robots should move towards each other until they are within 100 pixels of each other. At this point, they should begin to turn away from each other and travel in the opposite direction. If they come within 10 centimetres of each other, they should turn away from the sensor with the closest reading. When this program is run, the robots move in the way described and the separate processes can be seen as their proximity increases. They begin to turn slowly when within 100 pixels of each other, although this does not prevent the ultrasonic sensors from detecting the other robot. At this point they turn more quickly until the sensors are clear again. This is not, however, a very robust process. Often the robots will not start their initial turn soon enough and, by the time they detect each other, they are too close and collide. This problem was reduced slightly by increasing the distance at which the robots try to turn away from each other. Reducing the positional data update rate also helped, since previously some of the messages may not have been received. Increasing the rate of turn also helped avoid collisions.

Once these small adjustments were made, the program was much more successful. Even with three robots in the area, they would not collide with each other. There were still, however, some small issues. The way that the robots move is not particularly natural, especially as a simulation of birds; they move more like a herd than a flock. They move towards each other well, but as soon as they are too close they turn 180 degrees and move in the opposite direction. This gives a star-like motion to the group, with all robots moving in and out at the same time.

Discussion of Results

Although the movements do not appear very smooth currently, the solution described here is successful in its main goal of avoiding collisions. This is the most important part of this stage and some realism can be sacrificed in favour of reliability and robustness. This realism can be improved, however. By combining the methods used here with the flocking techniques, the movements can be smoothed significantly and made to appear more realistic overall. This will involve averaging the vector which is used to direct the robots away from each other with the robot’s current heading. By weighting these differently, it should be possible to get a movement which will allow the robots to travel side by side. It will still likely be necessary to use the ultrasonic sensors as an emergency collision avoidance system.


Once all of the positional data has been acquired along with measurements to avoid collisions, the robots can use this to calculate their positions in the flock and the movements that they need to make to continue flocking. Before implementing flocking, it must be shown that the robots can successfully swarm together. This is very similar to flocking but without a tendency to travel in the same direction as the rest of the swarm.

Description of the problem

This aspect requires the combination of all other techniques. There are rules which control the behaviour of an individual in the flock. Each rule will require that the robot move in a particular direction. For example, the cohesion rule will require that the robot moves towards the centre of the flock. Some of these rules are more important than others. The following is a prioritised list of rules.

  1. Avoid collisions with environment
  2. Avoid collisions with other robots
  3. Keep separated from other robots
  4. Steer towards the average mass of the flock
  5. Steer towards the average heading of the flock
  6. Steer towards any goals

Steering towards the average heading of the flock is a requirement which will keep the flock moving in the direction it is going. Without this rule, the flock acts more as a swarm. For this project, due to the way that the headings of each robot are calculated (i.e. locally), it is not possible to determine the average heading of the flock. This means that the flock may not perform quite as well as if this were possible, but it can be somewhat accounted for by using the current heading of the individual in question. This way, the robots will not turn sharply and should move in large curves as birds might.


Swarming, the initial stage of flocking, will be implemented first. It is a form of flocking which does not require that the individuals in the swarm know the headings of the others. This gives a motion with no apparent direction, but still keeps the group together tightly and allows it to move around avoiding predators or obstacles. Swarms work well when a task is assigned to the robots; for example, this might be to collect food or perhaps seek heat. The task effectively replaces the desire to move in the same direction as the other members of the swarm, and therefore causes the swarm to move in the direction with the most goal-achieving potential, e.g. the brightest area. Entities in a swarm will be seen to oscillate around the centre of the swarm.

Potential Solutions

Subsumption Architecture

Brooks describes a subsumption architecture to provide layered control of mobile robots. The idea is to keep each aspect of the robot control separate and effectively modular. All layers run simultaneously in order to keep the robot functioning; higher layers, however, are allowed to interrupt the processing of lower priority layers. This type of parallel processing allows all layers to access sensor data and act upon it. For example, in the case of a collision, the collision detection layer would halt all layers below it and take action by taking direct control of the motors. A system similar to this will be used in this project: there will be multiple threads running on the robot, each handling a different aspect of the robot's movements.
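A subsumption-style control loop can be sketched as follows. This is a Python illustration only: the layer names, sensor values and motor commands are hypothetical, and the real robots use NXC tasks rather than Python functions.

```python
def subsumption_step(layers, sensors):
    """Poll layers from highest to lowest priority; the first layer that
    wants control suppresses every layer beneath it."""
    for layer in layers:
        command = layer(sensors)
        if command is not None:
            return command
    return (0, 0)  # idle: no layer produced a motor command

# Hypothetical layers, highest priority first.
def avoid(sensors):
    """Emergency collision avoidance: spin away when an object is close."""
    if sensors["ultrasonic"] < 15:
        return (-50, 50)
    return None

def flock(sensors):
    """Normal flocking behaviour: cruise forwards."""
    return (50, 50)

print(subsumption_step([avoid, flock], {"ultrasonic": 10}))  # (-50, 50)
print(subsumption_step([avoid, flock], {"ultrasonic": 40}))  # (50, 50)
```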

Vectorised Control

One way to consider flocking is as a series of vectors which influence the direction of an individual in the flock. The vectors have their origin at the centre of the robot. There will always be a vector pointing towards the average mass of the flock; this is simply calculated by taking the average of the other two robots' positions. There will also be a vector in the direction of the heading of the robot, which will be used to smooth the movements of the robot. Additionally, there will be a vector for any robot which is too close, in the opposite direction to that robot. These vectors are normalised, weighted according to set values and then averaged. This gives the final direction vector that the robot should travel in. The heading is then used to calculate the direction in which the robot must turn. This comes down to the problem of finding the angle between two vectors. This would usually be a simple task involving the arctangent of the difference of the vectors; the NXC programming language, however, does not have an arctangent function. This complicates matters slightly. Although it would be possible to use a power series or approximation to the arctangent, this would be an overly complex approach. Instead, the vector dot and cross products can be used to calculate the angle between the vectors.
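The normalise, weight and average step described above might be sketched as follows. The weights here are illustrative assumptions, not the values tuned for the robots.

```python
import math

def normalise(v):
    """Scale a 2D vector to unit length (zero vectors stay zero)."""
    mag = math.hypot(v[0], v[1])
    return (v[0] / mag, v[1] / mag) if mag else (0.0, 0.0)

def steering_vector(heading, cohesion, repulsions,
                    w_head=1.0, w_coh=1.0, w_rep=2.0):
    """Weighted average of the influence vectors: current heading,
    attraction to the flock's centre of mass, and one repulsion vector
    per robot that is too close."""
    vectors = [(w_head, heading), (w_coh, cohesion)]
    vectors += [(w_rep, r) for r in repulsions]
    total_w = sum(w for w, _ in vectors)
    x = sum(w * normalise(v)[0] for w, v in vectors) / total_w
    y = sum(w * normalise(v)[1] for w, v in vectors) / total_w
    return (x, y)

# Heading along +x, flock centre along +y, nothing too close:
print(steering_vector((1, 0), (0, 1), []))  # (0.5, 0.5)
```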

a × b = |a||b| sin θ (3)
a · b = |a||b| cos θ (4)

Rearranging gives:

θ = arcsin((a × b) / (|a||b|)), −90 ≤ θ ≤ 90 (5)
θ = arccos((a · b) / (|a||b|)), 0 ≤ θ ≤ 180 (6)

In equations 5 and 6 the vector a is taken to be the heading of the robot, and the vector b the direction of movement that the robot needs to take. Both the cross product and the dot product are required to cover all four quadrants that vector b can occupy. The cross product gives a signed value of the angle, which can be used to determine which direction to turn. The dot product returns an angle between 0 and 180, which can be used to determine whether the movement vector is behind the robot. The robot is then made to turn at a rate proportional to the value of the cross product. This means that if the robot is heading in completely the wrong direction it will adjust its heading very quickly, and as it comes to face the correct direction it will reduce its rate of turn, down to 0 when it is travelling in the correct direction. The idea is very similar to a proportional-integral-derivative (PID) controller. This should give a very smooth turning movement and appear very natural.
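Translating equations 5 and 6 into code, a signed turn angle covering all four quadrants might be computed as below. This is a Python sketch of the same arcsin/arccos logic the robots apply in NXC.

```python
import math

def turn_angle(heading, target):
    """Signed angle (degrees) from the heading vector to the target
    vector, built from the 2D cross and dot products (equations 5, 6).
    Positive and negative indicate opposite turn directions; magnitudes
    above 90 mean the target lies behind the robot."""
    mag = math.hypot(*heading) * math.hypot(*target)
    cross = heading[0] * target[1] - heading[1] * target[0]  # signed
    dot = heading[0] * target[0] + heading[1] * target[1]
    angle = math.degrees(math.asin(max(-1.0, min(1.0, cross / mag))))
    if dot < 0:  # target is behind: reflect into the rear quadrants
        angle = math.copysign(180.0, angle) - angle
    return angle

# Heading along +x, target directly to the left:
print(turn_angle((1, 0), (0, 1)))  # 90.0
```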

Using tasks for multi-threading

Now that each of the separate technical challenges has been solved, they can be combined to provide all of the data and functionality required to simulate flocking. The Lego NXT supports multi-threaded processing with up to 256 concurrent threads (Hansen 2007), or 'tasks' in NXC. Each aspect of the program runs in a separate task, improving speed and modularity. The pieces of code described previously are effectively put into a task each and all run continuously. The tasks are as follows.

  1. receive – for collecting the positional data and storing it in an array
  2. reckon – for keeping track of the robot's orientation using dead reckoning
  3. flock – for controlling the flocking rules and driving the motors
  4. avoid – for the ‘emergency’ ultrasonic collision avoidance
  5. square – used to keep the robot within the camera’s visible region

The receive task starts all of the other tasks once it has begun receiving positional data. This prevents the robots from running away blindly as soon as the program starts. The NXT automatically schedules processor time for each of the tasks; therefore, they have to be considered to be running simultaneously. This means that special consideration has to be given to accessing data and other shared resources such as the motors. The program is structured such that only one thread is allowed to write to the positional data array. If this array is read at the same time as it is written, there are no serious consequences, since the positions will only be a few pixels out and will be updated a short time later. The biggest issue is over motor control. Both the flock thread and the avoid thread are allowed to drive the motors; if they both attempt to do so at the same time, there could be undesired results. This requires a mutex variable to protect the motors such that they can only be used by one thread at a time.
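The mutex pattern can be illustrated in Python, with a threading.Lock standing in for the NXC mutex and a list append standing in for an actual motor command; the task names mirror the tasks listed above.

```python
import threading
import time

motor_mutex = threading.Lock()  # stands in for the NXC mutex
commands = []                   # stands in for the physical motors

def drive(task_name, power):
    """Acquire the mutex before touching the motors, as the flock and
    avoid tasks must on the NXT."""
    with motor_mutex:
        commands.append((task_name, power))
        time.sleep(0.01)  # hold the motors briefly

flock_task = threading.Thread(target=drive, args=("flock", 50))
avoid_task = threading.Thread(target=drive, args=("avoid", -50))
flock_task.start(); avoid_task.start()
flock_task.join(); avoid_task.join()
print(sorted(t for t, _ in commands))  # ['avoid', 'flock']
```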

Tracking and Heading data

As previously discussed, it is necessary for each robot to keep track of its orientation at all times. This is done using dead reckoning and the optical encoders built into the robot's servo motors. The reckon task must constantly update a global heading variable, and this must be very accurate for the robots to be able to calculate the direction in which they need to turn. Therefore, it must be known that the robot can drive around for a period of time, performing different turns at different speeds, and still keep track of its heading. This involves running a program which drives the robot in a square, using the heading information to determine when it has turned through 90 degrees. By using the camera to track the location of the robot, it can be seen how accurate the heading information is.
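Dead reckoning of the heading from the two encoder readings might be sketched as follows; the wheel and axle dimensions here are assumptions for illustration, not the robot's actual measurements.

```python
import math

WHEEL_DIAMETER_CM = 5.6  # assumed NXT wheel size
TRACK_WIDTH_CM = 12.0    # assumed distance between the wheels

def heading_change(left_encoder_deg, right_encoder_deg):
    """Change in robot heading (degrees) implied by the two motor
    encoder readings, using simple differential-drive kinematics."""
    wheel_circumference = math.pi * WHEEL_DIAMETER_CM
    left_cm = left_encoder_deg / 360.0 * wheel_circumference
    right_cm = right_encoder_deg / 360.0 * wheel_circumference
    return math.degrees((right_cm - left_cm) / TRACK_WIDTH_CM)

# Equal encoder counts mean the robot drove straight:
print(heading_change(720, 720))  # 0.0
```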

Figure 14: Tracked positions of the robot driving in squares. (a) First test, angles too great; the squares diverge instead of overlapping. (b) Refined to make 90 degree turns; the squares are closer together.

Figure 14 shows the result of tracking the robot as it drives in squares. It can be seen that the squares move up and to the right. In figure 14a the squares also rotate clockwise. This is due to the robot turning more than 90 degrees on each corner, which was rectified in figure 14b by adjusting the way that the heading is calculated. The problem of the squares drifting, however, could not be resolved. Measuring the lengths of the sides of the squares shows that the bottom and right sides are longer than the other two; this is what causes the robot to drift. By running the robots in opposite directions on the floor, it would appear that either the floor is not level or the direction of the carpet pile causes a difference in the number of rotations measured. This should not pose serious issues with flocking, since the robots do not need to measure distances, only angles, and these are accurate.


To ensure that the robots will begin to steer in the direction required by the flocking algorithm, four tests must be performed. The final direction vector from the robot will be considered differently depending on four different circumstances. Due to the results of the arcsin and arccos formulas, the four quadrants must be considered; figure 15 shows how the values of arcsin and arccos can be used to determine which side the vector is on and whether it is behind the robot.

Figure 15: Results of the cross product and dot product when the direction vector points through different quadrants. The values can be used to determine which quadrant the vector is in and so the direction the robot needs to turn.


To test this, the three robots were used and placed within the area visible by the camera. Two of the robots were placed at the centre of the area and were switched off. This allowed them to be tracked by the camera but remain stationary. The third robot was then placed in each of the corners of the area always facing in the same direction. This robot should calculate the average positions of the two other robots and drive towards them. By performing the test in each corner, the different quadrant tests will be performed.

Discussion of Results

The positions of the robots were tracked and these have been overlaid onto a single image (figure 16) to save space. The red points show the robot being tested and the blue and green points show the stationary robots. It can be seen that the tested robot always performs the correct action and moves towards the others.

Figure 16: Testing heading adjustment with vectors in each quadrant around the robot.


Results and Analysis

The individual experiments and discussions for each of the separate parts of this project comprehensively analyse and evaluate the technologies, techniques and algorithms used to solve the problems at hand. Ultimately, however, the real results come from the combination of all of these techniques and the overall effect that is achieved. It is difficult to objectively test whether a 'flock' is actually flocking, but by keeping a record of the positions that the robots moved through, it is possible to analyse their paths and inspect the robots' movements with respect to one another. To the robots, all distances and coordinates are in pixels, so pixels will be the units of distance used in this discussion. The real distances represented vary with the camera and its positioning; in the current configuration, 100 pixels equates to about 50cm.

Individual Movements of robots

The movements of a single robot are easily shown; they are quite smooth and natural. The robots never move backwards and hardly ever come to a complete stop. When making turns they turn smoothly and slowly to follow their target when it is in front of them. In cases where the target is behind the robot, they turn quickly initially and slow down as they begin to face their target. The tracked positions of one robot are shown in figure 17. The very jagged parts of the path are due to the camera tracking.


Figure 17: The path taken by a robot as it moves to the centre of the group and avoids collisions.

Collision Avoidance

The collision avoidance algorithms generally work well. The robots can be seen to start to turn away from each other when they are within 100 pixels, but will usually then get close enough for the ultrasonic sensors to take over and steer the robots away more quickly. The robots do not always avoid collisions completely and occasionally bump into one another. This does not normally pose a problem because they have usually already begun to turn away and continue to do so. Because the motors are running at very low speed and power, the wheels do not slip and heading information is not lost. Collisions could likely be reduced further by increasing the distance at which the robots begin to turn away; however, due to the small area available, this is not practical for the demonstration.

Group Movements

To show that the robots are not moving randomly and are in fact moving to stay close to one another, we can find the distance of each robot from the centre of the group. By taking a mean of these distances, we can show that the group is becoming smaller over time. Initially, the robots were set up so that they only had three rules to follow:

  1. Avoid collisions
  2. Keep separated from other robots by at least 100 pixels
  3. Steer towards the average mass of the flock

This constitutes swarming, and the robots would be expected to initially move towards one another until they are too close, then turn away from each other. This process should repeat and there should be no collisions between the robots. The paths of one of the tests are shown in figure 18, along with a plot of the mean distance between the robots over time in figure 19.
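The mean-distance-from-centre measure plotted in figure 19 can be computed as in this sketch; positions are in pixels, and the coordinates in the example are made up for illustration.

```python
import math

def mean_distance_from_centre(positions):
    """Mean distance of each robot from the group's centroid (pixels)."""
    n = len(positions)
    cx = sum(x for x, _ in positions) / n
    cy = sum(y for _, y in positions) / n
    return sum(math.hypot(x - cx, y - cy) for x, y in positions) / n

# Three robots placed roughly 74 pixels from their centroid:
print(round(mean_distance_from_centre([(0, 74), (64.1, -37.0), (-64.1, -37.0)])))  # 74
```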


Figure 18: Positions of robots as they swarm; each colour represents a different robot.

Figure 19: Mean distance between members of the swarm and the centre of the group.


The mean distance here is seen to oscillate over time as the robots moved too close then further away from each other. This reflects the motion of the group, which contracted and expanded over time. The variation in the average has settled to around 40 pixels. The lowest value of the mean distance is 30 pixels (after 50 updates); for this value to be possible, all three robots were closer than 100 pixels at this point in time. The mean distance is then seen to rise sharply after that point. This shows the point at which the ultrasonic sensors take over and direct the robots away from one another. After this, the robots never get closer than 40 pixels to the mean; the vectorised rule manages to turn them away from each other to maintain the distance of 100 pixels.

The next step added a rule which would give the robots a tendency to continue on the path that they were currently on. This is an attempt at approximating the rule in flocking which would require that members of the flock try to align with the average heading of the flock. This should cause the robots to remain driving in one direction unless overcome by a larger vector in another direction. Using these rules, the robots should move towards one another as in swarming and stay close together. This will be shown by the average distance decreasing and then remaining stable.

If flocking perfectly, the distance from the centre of the flock would be fixed at 74 pixels and the robots would be in a triangle formation with the mean position at its centre, in accordance with figure 20 (from the definition of an equilateral triangle).

Figure 20: Perfect formation of robots flocking.


Figure 21 shows the average distance from the centre over time.

Figure 21: Mean distance between members of the flock and the centre of the group.


The minimum distance from the centre of the flock again appears at around 30 pixels. The flock then separates more quickly than the swarm did. The mean distance from the centre of the flock then quickly converges on around 50 pixels. This is a very good indication that the robots are holding their positions relative to one another and moving together in the same direction, as a flock of birds would. This is a little way off the ideal 74 pixels, which likely indicates that the attraction rules are stronger than the avoidance rules.

It is also helpful to consider the movement of the group as a whole. It would be expected that a swarm would have no general direction and would stay centred around one area whereas a flock might move in a particular direction. Figure 22 shows a plot of the average position of the group over time. The red line is the swarm whereas the blue line represents the flock.

Figure 22: Average position of the flock in blue and swarm in red.


From this we can see that the flock does indeed appear to move in a defined direction, whereas the average position of the swarm moves in a circle. Calculating the standard deviation of the mean positions in each of x and y can give an idea of how far apart the robots were on average. For the swarming algorithm, the standard deviation is (18, 25) and for the flocking, (39, 55). This indicates that, for the majority of the time, the robots were further apart when flocking than when swarming. This should be expected because, when flocking, the robots have less of a tendency to move towards one another due to the additional rule of keeping on their current heading. This means that they may stay further apart, but are in a more stable position, as indicated by the graph.
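The spread figures quoted above correspond to a per-axis standard deviation of the recorded mean positions, roughly as sketched below (a population standard deviation is assumed; the dissertation does not state which variant was used).

```python
def std_dev_xy(points):
    """Population standard deviation of the x and y coordinates of a
    recorded path of (x, y) positions."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    sd_x = (sum((x - mean_x) ** 2 for x, _ in points) / n) ** 0.5
    sd_y = (sum((y - mean_y) ** 2 for y, _ in points) / n) ** 0.5
    return (sd_x, sd_y)

print(std_dev_xy([(0, 0), (2, 0), (0, 2), (2, 2)]))  # (1.0, 1.0)
```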

The individual components of this project have all returned successful results. From the evaluations, calculations and results here, it would certainly appear that the robots are performing what is expected of both a swarming and a flocking algorithm. Visually, the robots do appear to be flocking, though perhaps more like a flock of sheep than of birds. Their movements are a little slow and jerky, although these may be improved upon with some further alterations. The appearance is impressive, especially considering that the robots are moving under only four simple rules and have no coordination between them.

Discussion and Outcomes

Use of the Lego NXT platform

For this project, the use of Lego Mindstorms NXT is by no means perfect; a lack of advanced sensors and unusual Bluetooth protocols are its main shortcomings. However, despite these issues, the robots have performed well and for the most part have been quite capable. The processing power is more than enough to handle the programs used here. The NXT handles multi-threading well and has a good library of string and trigonometric functions to perform simple calculations with ease. The servo motors and their rotary encoders are impressively accurate and quite suitable for simple dead reckoning tasks. The NXC programming language is powerful yet simple to use and is capable of much more than is used in these programs. Bluetooth does have serious issues regarding connections to computers, but appears to be fast and reliable when used between NXT bricks. Relaying over USB, as in this project, is the only real solution for transmitting data from a PC to the NXTs remotely. This method is operating system independent, although it does require the use of Matlab.

Processing and Matlab for image processing and communication

Processing has proved a useful tool for image processing and rapid program prototyping. Its Java-like syntax has been easy to learn, and an extensive library of image processing functions allows simple programs to be written quickly and easily. The large amount of community support for this environment meant that finding a library to process images from a camera was easy. The library is also written with an interface that makes it much easier to capture and manipulate images than using Java, C++ or Matlab. Writing the data to a file was also very simple, and example programs are only a few lines long. This led to a program which was easy to understand and debug. The use of Matlab was also very successful; again, it is an excellent environment for rapid prototyping, and its libraries for file reading made for simple programming. The biggest aids were the RWTH library and Daniele Benedettelli's work on the NXT and Bluetooth, which greatly simplified the process of communicating with the robots.

Bluetooth as a high speed mobile network

Despite the initial issues between Bluetooth and the PC, the Bluetooth connections appear very fast and reliable. There were concerns about interference from other devices, including WiFi, causing dropped connections and poor speeds. In reality, even with three simultaneous connections, they were never dropped unexpectedly nor intermittent throughout the development and testing stages of the project. Once the robots began to move around, this posed no problems either; the connections remained just as reliable and fast as before. This indicates that, for short distances at least, Bluetooth with the Lego NXTs is a very reliable method for one-way communication. There may be issues when transferring large amounts of data over the connections, and it is likely that they would become less reliable over larger distances. However, when used in compliance with the protocols, the connections work very well.

Flocking as a means to control multiple autonomous vehicles

Overall, the results of this project are not only visually impressive, but provide a useful insight into the practicalities of implementing flocking with a group of robots. It has been a great challenge to achieve the desired results, but the independently controlled robots provide an excellent platform for further developments. Adding goal seeking to the robots could become very useful in a real world application. A similar system could be used on the surface of water to clean up the recent oil spill in America. The most difficult part of the system is finding a way to locate other robots without either a third party or without having to communicate. If a simple, reliable solution was found to this then robot swarms and flocks could become very powerful tools in many different environments.


The challenge to create a group of Lego NXT robots that can flock together was an interesting one, requiring the use and combination of many different technologies. The selection of development environments has worked well and a fairly robust system has been developed. It would be interesting to implement a similar system with more slave robots, which would be reasonably easy to accomplish with another set of master–slave robots and would demonstrate the flocking effect better than using only three. Using the system over a larger area would also be interesting: with enough robots and a large enough area, it would become viable to use GPS to locate the robots instead of a camera, freeing them to act completely independently with no extra hardware. A gyro or compass sensor could be used on the robots instead of the dead-reckoning system to keep track of their headings, making for a very robust system that would operate across a variety of terrains. Another interesting challenge would be to give robots different skills and design a method to manage the use of those skills with no central control.
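The dead-reckoning system mentioned above follows the standard odometry model for differential steering (see the Lucas tutorial in the references): wheel travel is derived from the motor encoders, and the pose is integrated step by step. A minimal sketch, again in Java rather than the project's NeXT Byte Code, with a placeholder wheelbase rather than the NXT's measured value:

```java
// Sketch of dead reckoning for a differential-drive robot. Each update
// takes the distance travelled by the left and right wheels since the
// last call (computed from encoder counts) and integrates the pose.
public class DeadReckoning {
    double x, y, theta;                   // pose estimate: metres, radians
    static final double WHEELBASE = 0.12; // distance between wheels, assumed

    void update(double dl, double dr) {
        double d = (dl + dr) / 2.0;            // distance moved by the centre
        double dTheta = (dr - dl) / WHEELBASE; // change in heading
        // Midpoint approximation: treat the heading as changing linearly
        // over the step, so use the average heading for the translation.
        x += d * Math.cos(theta + dTheta / 2.0);
        y += d * Math.sin(theta + dTheta / 2.0);
        theta += dTheta;
    }
}
```

Equal wheel travel moves the robot straight along its heading; opposite wheel travel turns it in place, which is exactly the error-prone integration a gyro or compass sensor would help correct.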

Ultimately, this project has met the challenges set out at the beginning and has implemented a flocking algorithm using simple components and a relatively cheap robotics platform. It is not ideal, and improvements could be made: removing the camera system would be the biggest, although this seems nearly impossible with the available hardware. With slight refinement, the system could make a great demonstration of the practical application of flocking using multiple inexpensive robots.

References


Anders. C++ communication library, April 2009. URL http://www.norgesgade14.dk/bluetoothlibrary.php.

P. Benavidez, K. Nagothu, A.K. Ray, T. Shaneyfelt, S. Kota, L. Behera, and M. Jamshidi. Multi-domain robotic swarm communication system. System of Systems Engineering, 2008. SoSE '08. IEEE International Conference, 1:1–6, 2008. doi: 10.1109/SYSOSE.2008.4724189. URL http://ace.utsa.edu/publications/auv3.pdf.

Daniele Benedettelli. Matlab bluetooth router, January 2009. URL http://robotics.benedettelli.com/BT_router.htm.

Jens Blauert. Spatial Hearing – The Psychophysics of Human Sound Localization. Table 2.1, page 39. The MIT Press, 1997.

John G. Blitch. Artificial intelligence technologies for robot assisted urban search and rescue. Expert Systems with Applications, 11(2):109–124, 1996. ISSN 0957-4174. doi: 10.1016/0957-4174(96)00038-3. URL http://www.sciencedirect.com/science/article/B6V03-3WP2C28-3/2/7cefc75e8d6631aefe13ea81adcb7611.

Misel Brezak, Ivan Petrovic, and Edouard Ivanjko. Robust and accurate global vision system for real time tracking of multiple mobile robots. Robotics and Autonomous Systems, 56(3):213–230, 2008. ISSN 0921-8890. doi: 10.1016/j.robot.2007.08.002. URL http://www.sciencedirect.com/science/article/B6V16-4PGPVV0-1/2/94a89f2a1102a59ecef7c4fecc861fba.

Mike Brown. A subsumption architecture for control of the lego mindstorm nxt robot. URL http://www.dcs.shef.ac.uk/intranet/teaching/projects/archive/ug2008/pdf/aca05mb.pdf.

C. Cutts and J. Speakman. Energy savings in formation flight of pink-footed geese. Journal of Experimental Biology, 189(1):251, 1994.

John R.G. Dyer, Christos C. Ioannou, Lesley J. Morrell, Darren P. Croft, Iain D. Couzin, Dean A. Waters, and Jens Krause. Consensus decision making in human crowds. Animal Behaviour, 75(2):461–470, 2008. ISSN 0003-3472. doi: 10.1016/j.anbehav.2007.05.010. URL http://www.sciencedirect.com/science/article/B6W9W-4PXNHP4-2/2/72ce9dd9f134a8a9a0e1887e77eeeb7a.

Ben Fry and Casey Reas. Processing, an open source programming language. URL http://processing.org/.

J. Goldstein. Emergence as a construct: History and issues. Emergence, 1(1):49–72, 1999. URL http://www.wu.ac.at/am/Download/ae/Issue_1-1.pdf#page=50.

John Hansen. Not eXactly C programmer's guide, October 2007. URL http://bricxcc.sourceforge.net/nbc/nxcdoc/NXC_Guide.pdf.

Geoff Johnson. How to make a webcam work in infra red. URL http://www.hoagieshouse.com/IR/.

Nikolaos M.P. Kakalis and Yiannis Ventikos. Robotic swarm concept for efficient oil spill confrontation. Journal of Hazardous Materials, 154(1-3):880–887, 2008. ISSN 0304-3894. doi: 10.1016/j.jhazmat.2007.10.112. URL http://www.sciencedirect.com/science/article/B6TGF-4R3C0D5-6/2/62e02208d5b851cbfa2972eac69eedcb.

S. Kernbach, E. Meister, F. Schlachter, K. Jebens, M. Szymanski, J. Liedke, D. Laneri, L. Winkler, T. Schmickl, R. Thenius, et al. Symbiotic robot organisms: Replicator and symbrion projects. Proc. Perform. Metrics Intell. Syst, 2008.

Gregor Klancar, Matej Kristan, Stanislav Kovacic, and Omar Orqueda. Robust and efficient vision system for group of cooperating mobile robots with application to soccer robots. ISA Transactions, 43(3):329–342, 2004. ISSN 0019-0578. doi: 10.1016/S0019-0578(07)60152-9. URL http://www.sciencedirect.com/science/article/B6V3P-4MVP5TY-2/2/5586775447471b91d0f98ed900aa7ccd.

Johnny Chung Lee. Tracking your fingers with the wiimote. URL http://johnnylee.net/projects/wii/.

Lego. Lego mindstorms: What is NXT? URL http://mindstorms.lego.com/en-us/whatisnxt/default.aspx.

Lego. Lego education: Mindstorms RCX, 1998. URL http://www.lego.com/eng/education/mindstorms/home.asp?pagename=rcx.

Jacques Leslie. Free flight, 1996. URL http://www.wired.com/wired/archive/4.04/es.faa.html?pg=1&topic=&topic_set=.

Linus Atorf, Alexander Behrens, Achim Knepper, et al. RWTH – Mindstorms NXT toolbox for MATLAB, November 2009. URL http://www.mindstorms.rwth-aachen.de/.

G.W. Lucas. A tutorial and elementary trajectory model for the differential steering system of robot wheel actuators, 2001. URL http://rossum.sourceforge.net/papers/DiffSteer/DiffSteer.html.

The MathWorks. Matlab – a high-level language and interactive environment, 2010. URL http://www.mathworks.com/products/matlab/.

Michael Werman, Subhashis Banerjee, Sumantra Dutta Roy, and MaoLin Qiu. Robot localization using uncalibrated camera invariants, 1999. URL http://www.cs.huji.ac.il/labs/vision/papers/localization-cvpr99.pdf.

MOD. The grand challenge, 2008. URL http://www.mod.uk/DefenceInternet/DefenceNews/EquipmentAndLogistics/StellarTakesTopModPrizeForBattlefieldInnovators.htm.

R. Murphy, J. Casper, M. Micire, and J. Hyams. Mixed-initiative control of multiple heterogeneous robots for USAR, 2000. URL citeseer.ist.psu.edu/murphy00mixedinitiative.html.

John C. Murray, Harry Erwin, and Stefan Wermter. Robotics sound-source localization and tracking using interaural time difference and cross-correlation. In AI Workshop on NeuroBotics, 2004. URL http://www.his.sunderland.ac.uk/ps/murray_ulm04.pdf.

Josh Nimoy. Myron video capture and computer vision plugin, 2001. URL http://webcamxtra.sourceforge.net/index.shtml.

Nissan. Robot car “goes to school” on collision-free driving by mimicking fish behavior, 2009. URL http://www.nissan-global.com/EN/NEWS/2009/_STORY/09100101e.html.

Paul M. Maxim, Suranga Hettiarachchi, William M. Spears, Diana F. Spears, Jerry Hamann, Tom Kunkel, and Caleb Speiser. Trilateration localization for multi-robot teams. URL http://cs.eou.edu/CSMM/surangah/research/icinco.pdf.

Craig W. Reynolds. Flocks, herds, and schools: A distributed behavioral model, in computer graphics. In SIGGRAPH '87 Conference Proceedings, volume 4, pages 25–34, 1987.

Valentin Robu, Harry Halpin, and Hana Shepherd. Emergence of consensus and shared vocabularies in collaborative tagging systems. ACM Trans. Web, 3(4):1–34, 2009. ISSN 1559-1131. doi: 10.1145/1594173.1594176.

David Sumpter. Collective animal behavior, 2008. URL http://www.collective-behavior.com/index.html.

SwarmSystems. Autonomous quadrotor mavs, 2008. URL http://www.swarmsys.com/technology.html.

Sivan Toledo. Analysis of the NXT bluetooth-communication protocol, September 2006. URL http://www.tau.ac.il/~stoledo/lego/btperformance.html.

Emil Vassev, Mike Hinchey, and Joey Paquet. Towards an ASSL specification model for NASA swarm-based exploration missions. In SAC '08: Proceedings of the 2008 ACM symposium on Applied computing, pages 1652–1657, New York, NY, USA, 2008. ACM. ISBN 978-1-59593-753-7. doi: 10.1145/1363686.1364079.

Mark Yim, Kimon Roufas, David Duff, Ying Zhang, Craig Eldershaw, and Sam Homans. Modular reconfigurable robots in space applications. Auton. Robots, 14(2-3):225–237, 2003. ISSN 0929-5593. doi: 10.1023/A:1022287820808.
