Saturday 30 March 2013

ROBOTS IN MEDICAL SCIENCES

   


A few decades ago, people knew robots only through movies and books, where they served mainly as entertainment. Today, robots play a major role in medicine, and scientists keep finding new ways to use them in the field. Robots can help the medical world in three main ways: diagnosis, surgery, and restoring patients to good health.
Most surgeries carry a high risk of complications, and sometimes even of mortality. Scientists and doctors have therefore carried out extensive research to make surgery safer. Robots can help a lot here: because they can make smaller cuts in organs and tissues, the surgery becomes safer and patients recover more easily and comfortably.
The most important requirement in medicine is an accurate and safe diagnosis. Patients are often diagnosed inaccurately and suffer as a result. Robotic test instruments can perform many of the tests done by doctors or nurses, such as sample collection and CAT scans. This helps reduce errors and also curbs malpractice in the reports delivered.
Many people are injured in accidents, and rehabilitation can improve the quality of such patients' lives. Robots support this process by helping patients restore the function of their legs and hands, and they can also monitor each patient's progress. Thus robots play a vital role in medicine and reduce the workload of humans.


Robots are critical to the medical field, where extreme precision and delicacy are necessary and the margin for error is slim. In this section, learn how robots are used to keep you healthy.
The first generation of surgical robots is already being installed in a number of operating rooms around the world. Robotics is being introduced to medicine because it allows unprecedented control and precision of surgical instruments in minimally invasive procedures. So far, robots have been used to position an endoscope, perform gallbladder surgery and correct gastroesophageal reflux and heartburn. The ultimate goal of the robotic surgery field is to design a robot that can perform closed-chest, beating-heart surgery. The use of robotics in surgery will no doubt expand over the coming decades. Minimally Invasive Surgery (MIS) is a revolutionary approach: the operation is performed with instruments and viewing equipment inserted into the body through small incisions created by the surgeon, in contrast to open surgery with large incisions. This minimises surgical trauma and damage to healthy tissue, resulting in shorter patient recovery times.

Robots are performing their function in different fields of medical science, such as

SURGERY:


Because robots can perform major operations through only small incisions, patients receive many benefits: lessened trauma, fewer infections, decreased healing time, and a faster discharge from the hospital. Robots are even used to perform heart surgery without opening patients' chests.

EDUCATION:


Robots are currently used to test medical students. Pregnant humanoid robots, for instance, prepare students for various birth complications.

ADMINISTRATION:


Robots are also affecting the way hospitals are run and medications are distributed. They help keep hospital stays shorter and the risk of infection minimized.


Possibly the most glamorous application of robots in medicine: the current state of the art couples a human surgeon with mechanisms that can perform surgery through very small incisions, greatly reducing the risk to patients. The surgeon's ability to control the mechanism is enhanced by force feedback in the controls, giving the operator a sense of touch to help control the robot. This type of robot isn't completely independent and is more properly called a teleoperated device, but it uses much of the same technology an independent robot would employ for motion control, imaging and tactile/force feedback. The fully autonomous surgical robot of science fiction literature and screen entertainment is unlikely to appear in the near future, and even if technically possible it would be viewed with great skepticism by patients (and their lawyers).
Robots are capable of many kinds of work in hospitals and medicine; some of them are:
    


Diagnosis

Robotic test instruments range from exotic scanners (such as computerized axial tomography: the CAT scan) to laboratory equipment that processes and analyzes samples of blood and other materials extracted from the body for diagnostic purposes. They provide consistency and accuracy, reducing the possibility of human error that can cause an inaccurate diagnosis. While not the classic industrial robot, they do employ many of the same automation techniques.     


Prosthetics

Mechanical replacements for missing limbs and organs that can interact with the human organic system are a long-standing goal of the medical community. Research into replacement hearts, limbs, eyes, ears and other organs offers hope for effective implanted devices and replacement limbs that can function for long periods of time. Robotic devices can also assist people with severe restrictions on movement, in many cases giving them at least some ability to move around in or near their homes.

One of the great challenges facing the designers of implantable devices is the need to avoid stimulating the normal immune system response to foreign objects, a response that can cause serious complications or disable the device. It is also necessary for the device to be able to survive in the biological environment without damaging chemical interactions with the body.



Rehabilitation 

Robots can provide exercise platforms to help restore limb function and can monitor the condition of patients undergoing rehabilitation from the effects of injuries, stroke or other brain or nerve damage.

Pharmaceuticals 


Industrial robots used to manufacture drugs provide consistency and cost control in drug production, and can perform many process and handling steps without the risk of contamination from human operators, or of exposing humans to dangerous chemicals or inadvertent drug doses.




June 24, 2008 The rise of robotic surgery has marked a new age in medical science and one of its pioneers has just reached a major milestone. Dr. W. Randolph Chitwood, Jr. has performed his 400th robotic-assisted mitral valve repair at Pitt County Memorial Hospital. A globally recognized cardiothoracic surgeon, Chitwood’s robotic-assisted surgery training center at the Brody School of Medicine at East Carolina University (ECU) was the first site in the US to offer formal training in robotic-assisted mitral valve (a dual flap valve in the heart located between the left atrium and left ventricle) repair procedures.


Robots have now been implemented in every aspect of our lives, and it cannot be ignored how much better and more comfortable they make our lives.

"By integrating computer-enhanced technology with the surgeons’ technical skills, robotic-assisted procedures enable surgeons to perform better surgery in a manner never before experienced"
So nowadays they are an integrated part of our life.










Thursday 28 March 2013

A LINE FOLLOWER ROBOT


The purpose of this document is to help you build a line following robot. Starting with an overview of the system, it covers implementation details such as circuits and algorithms, followed by some suggestions on improving the design.



























BACKGROUND:


The present practice in industry is to use crane systems to carry parcels from one place to another, including at harbours. Sometimes lifting heavy weights causes the lifting gear to break and damages the parcels too. A line follower's movement depends on its track, so such a robot can be used to transport materials from one place to another within the industry.


Practical applications of a line follower: automated cars running on roads with embedded magnets; guidance systems for industrial robots moving on a shop floor, etc.

Prerequisites:
Knowledge of basic digital and analog electronics (a course on Digital Design and Electronic Devices & Circuits would be helpful), and C programming.


WORKING PRINCIPLE:

This simple robot is designed to follow a black line on the ground without straying off it too much. The robot has five sensors installed underneath the front of the body, and two DC motors drive the wheels. A circuit takes the input signals from the five sensors and controls the speed of each wheel's rotation. The control works so that when a sensor senses the black line, the motor on that side slows down; the difference in rotation speed then makes it possible to turn. For instance, if a sensor on the right side senses the black line, the right wheel slows down and the robot makes a right turn.
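This steering rule can be sketched in a few lines of Python (used here only for illustration; the speed values are made-up, not measured):

```python
def wheel_speeds(left_on_line, right_on_line, base=1.0, slow=0.3):
    """Differential steering: slow the wheel on the side whose sensor
    sees the black line, so the robot turns toward that side."""
    left = slow if left_on_line else base
    right = slow if right_on_line else base
    return left, right

# Right sensor on the line -> right wheel slows -> robot turns right
print(wheel_speeds(False, True))  # (1.0, 0.3)
```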

Overview of the robot











Now we will discuss all the blocks in detail. I'll explain the complete procedure to make each block separately and how to assemble them.


  1. SENSOR ARRAY
   
 

We have used IR sensors to make the sensor array. An IR sensor consists of a transmitter and a receiver, as shown in the photo above: the transparent LED is the transmitter, while the black one is the receiver. The IR transmitter emits infra-red radiation, which falls on the surface; the reflected radiation is picked up by the receiver. How much radiation is reflected depends on the colour of the surface, as shown in the photo below.
The sensors should be oriented as shown in the photo above so that the robot can detect sharp as well as curved turns.

Most of the radiation is reflected back from a white surface, but it is just the opposite for black surfaces, as shown in the diagram below.

Working of IR sensors

Circuit diagram of one pair of IR sensors






















This circuit diagram shows only one IR sensor. We need to make 5 such sensors on a PCB, as shown in the first photo of the sensor array.












2. COMPARATOR

We used comparators (op-amps) to convert the analog signals received from the sensors into digital signals.
The op-amp IC we used is the LM324, which gives a square wave as output by comparing the sensor signal with a reference signal that we provide.



Theory :-
As you all know, in the world of electronics all microcontrollers and microprocessors work on digital signals, but from sources like our IR sensors we get an analog signal. So in embedded systems it is mandatory to convert the analog signal into a digital one.
To do this conversion we use operational amplifiers (op-amps) as voltage comparators. An op-amp is shown in the figure below:




We fix a voltage at the negative input with the help of a 10k ohm variable resistor, and at the positive input we feed our analog signal. If the analog signal is greater than the fixed voltage at the negative input, we get 1 at the output (i.e. +5V); if it is less, we get 0 at the output (i.e. 0V).
Note: set the negative-input voltage with the variable resistor according to your requirement.
You can see this A-to-D conversion in the figure below.
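The comparator stage can be modelled in a couple of lines of Python (a sketch for illustration only; the voltage values are assumptions, not measurements):

```python
def comparator(v_signal, v_ref, v_high=5.0):
    """One LM324 stage used as a comparator: output goes HIGH when the
    sensor voltage at the positive input exceeds the reference set by
    the 10k variable resistor at the negative input."""
    return v_high if v_signal > v_ref else 0.0

print(comparator(3.2, 1.5))  # 5.0 -> logic 1 (white surface)
print(comparator(0.8, 1.5))  # 0.0 -> logic 0 (black line)
```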




Several op-amp ICs are available, such as the LM358, 741 and LM324. Here we use the LM324, which has 4 op-amps in it.


The reference voltage is given to the inverting terminals (pins 2, 6, 9, 13). The value of Vcc should be 5V so that the op-amp output stays at or below 5V; otherwise the microcontroller will not read the signal from the op-amp correctly.

Sensitivity of IR sensor:

The sensitivity of a sensor means how effectively it senses changes occurring in its surroundings. The sensitivity of the IR sensor is controlled by the reference voltage at pin 2 using the variable resistor.

· Large reference voltage – less sensitive.
· Small reference voltage – more sensitive.


3. MICRO-CONTROLLER

I am using the ATmega328 on an Arduino Uno development board. It is very easy to program, and to burn the program onto your Arduino.
Microcontroller board used: Arduino Uno

Technical specification:-

Microcontroller:-                                      ATmega328

Operating Voltage                                    5V

Input Voltage (recommended)                7-12V

Input Voltage (limits)                               6-20V

Digital I/O Pins                                        14 (of which 6 provide PWM output)

Analog Input Pins                                     6

DC Current per I/O Pin                              40 mA

DC Current for 3.3V Pin                         50 mA

Flash Memory                                          32 KB of which 0.5 KB used by bootloader

SRAM                                                       2 KB

EEPROM                                                  1 KB

Clock Speed                                             16 MHz

















4. MOTOR DRIVER

The L293D IC is a dual H-bridge motor driver. One H-bridge can drive a DC motor bidirectionally. The L293D is a current-boosting IC: the output from the microcontroller cannot drive the motors by itself, so the L293D is used for this purpose. The L293D is a 16-pin IC with two enable pins, which must remain high to enable both H-bridges. The L293B is another IC of the L293 series, with two main differences from the L293D.
PIN DIAGRAM OF L293D






























CODES FOR LINE FOLLOWER


The following code is a left-priority code, i.e. if all sensors detect the black line the robot will follow the left line. You can change the priority order with a slight change in the code.




int motorLEFTpin1 = 5;              //define digital output pin no.
int motorLEFTpin2 = 6;              //define digital output pin no.
int motorRIGHTpin1 = 10;
int motorRIGHTpin2 = 11;
int irl2=2;
int irl1=4;
int irc=7;
int irr1=8;
int irr2=12;


int il2=0;
int il1=0;
int ic=0;
int ir1=0;
int ir2=0;

void setup () {
  Serial.begin(57600); 
  
  pinMode(irl2,INPUT);
  pinMode(irl1,INPUT);
  pinMode(irc,INPUT);
  pinMode(irr2,INPUT);
  pinMode(irr1,INPUT);
  pinMode(motorLEFTpin1,OUTPUT);        //set pin 5 as output
  pinMode(motorLEFTpin2,OUTPUT);        // set pin 6 as output
  pinMode(motorRIGHTpin1,OUTPUT);       // set pin 10 as output
  pinMode(motorRIGHTpin2,OUTPUT);        // set pin 11 as output
  delay(100);
}        

void loop()
{
  int c=0;
  int r=0;
  

  
  il2=digitalRead(irl2);
  
  il1=digitalRead(irl1);
  
  ic=digitalRead(irc);
  ir2=digitalRead(irr2);
  
  ir1=digitalRead(irr1);
  
  
  Serial.print("Raw Ratel2: ");
  Serial.println(il2);
  Serial.print("Raw Ratel1: ");
  Serial.println(il1);
  Serial.print("Raw Rateic: ");
  Serial.println(ic);
  Serial.print("Raw Rater2: ");
  Serial.println(ir2);
  Serial.print("Raw Rater1: ");
  Serial.println(ir1);
  Serial.println("\t");
  Serial.println("\t");
  

    
  // Left-priority logic: if either left sensor sees the line, turn left.
  if (ir2 == LOW)
    r = 1;
  if (ir1 == LOW)
    r = 2;
  if (ic == LOW)
    c = 3;

  if (il2 == LOW || il1 == LOW)
    lft();                 // left sensor on the line: turn left
  else if (c > r)
    st();                  // centre sensor on the line: go straight
  else if (r > c)
    rt();                  // right sensor on the line: turn right
  else if (il1 == HIGH && il2 == HIGH && ic == HIGH && ir1 == HIGH && ir2 == HIGH)
    lft();                 // line lost: keep turning left to find it
}

void st()
{
   digitalWrite(motorLEFTpin1,HIGH);
   digitalWrite(motorLEFTpin2,LOW);
   digitalWrite(motorRIGHTpin1,HIGH);
   digitalWrite(motorRIGHTpin2,LOW);
}

void rt()
{
  digitalWrite(motorLEFTpin1,HIGH);
  digitalWrite(motorLEFTpin2,LOW);
  digitalWrite(motorRIGHTpin1,LOW);
  digitalWrite(motorRIGHTpin2,LOW);
}

void lft()
{
   digitalWrite(motorLEFTpin1,LOW);
   digitalWrite(motorLEFTpin2,LOW);
   digitalWrite(motorRIGHTpin1,HIGH);
   digitalWrite(motorRIGHTpin2,LOW);
}




ALL THE BEST GUYS..... IF YOU FIND ANY ISSUE, LET ME KNOW.

Monday 25 March 2013

ROBOTICS EYE (A SIXTH SENSE TECHNOLOGY)



                        (Object tracking using Sixth Sense Technology)


Nowadays humans are building advanced robots capable of performing particular tasks, using different techniques to provide artificial intelligence so that the robots can sense the surrounding environment and act accordingly. Compared to humans, however, the intelligence of robots always lags behind. Using sixth sense technology (a visually controlled machine), which makes use of digital image processing, we can provide an artificial eye to the robot so that it can sense its surroundings and react accordingly; the intelligence of robots can thus be enhanced to a certain extent. In this paper we illustrate our project on sixth sense technology, in which we control the movement of a robot based on the location of an object; in other words, we track a target using the concept of sixth sense technology.
Index Terms—Sixth Sense Technology, MATLAB, Digital Image processing, robot, camera.
                                                                                                                           

Sixth Sense Technology is a revolutionary way to augment the physical world directly without using dedicated electronic chips. Sixth Sense is a set of wearable devices that acts as a gestural interface, augmenting the physical world around us with digital information and letting users interact with that information through natural hand gestures. This technology is gaining popularity because of its usability, simplicity and ability to work independently in today's scenario.
            
The Sixth Sense Technology makes use of digital image processing techniques. There are different kinds of digital images. YCbCr and RGB are colour formats, which consume more memory but give good clarity. Storage and transmission of colour images is often done in the YCbCr format, which is less prone to noise than RGB. There are also binary images, which contain only two logic values, 0 or 1, and intensity images, which are black-and-white images with brightness levels ranging from 0 to 255.
Any digital colour image consists of RGB (Red, Green and Blue) components in different proportions. A digital image of resolution 640x480 contains 640x480 pixels in a 2-dimensional plane. Each pixel has three planes (sub-pixels): a Red plane, a Green plane and a Blue plane. Each sub-pixel is coded with an 8-bit binary value to represent 256 brightness levels, from 0 to 255; different brightness levels of these three sub-pixels give different colours. Each sub-pixel consumes 8 bits, or 1 byte, of memory, and each pixel has 3 sub-pixels, so each RGB pixel consumes 3 bytes. An RGB image of resolution 640x480 therefore consumes 921600 (640x480x3) bytes of memory.
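The memory arithmetic can be double-checked with a one-line helper (a Python sketch; the function name is ours, for illustration):

```python
def rgb_memory_bytes(width, height):
    """Memory consumed by a raw RGB image: 3 sub-pixels per pixel,
    1 byte (8 bits) per sub-pixel."""
    return width * height * 3

print(rgb_memory_bytes(640, 480))  # 921600
```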
    
Objects of different colours contain different pixel values. When an image is captured, the position of the required object is found by specifying the pixel range of that object. The colour image is converted into a binary image: pixels whose values lie in the range of the required object's pixel values are converted to binary logic value 1 (white pixel), and pixels out of that range are converted to binary value 0 (black pixel). With this binary image it is very easy to find the location or position of the object.
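A minimal sketch of this range thresholding, using plain Python lists (the pixel values and the range here are illustrative only):

```python
def to_binary(image, lo, hi):
    """Threshold an RGB image (nested lists of (r, g, b) tuples) into a
    binary image: 1 where every channel lies within the object's range,
    0 everywhere else."""
    (rlo, glo, blo), (rhi, ghi, bhi) = lo, hi
    return [[1 if rlo <= r <= rhi and glo <= g <= ghi and blo <= b <= bhi
             else 0
             for (r, g, b) in row]
            for row in image]

img = [[(50, 40, 120), (200, 200, 200)],
       [(60, 35, 150), (10, 10, 10)]]
print(to_binary(img, (38, 27, 104), (84, 80, 175)))  # [[1, 0], [1, 0]]
```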
The main difference between a robot and an artificially intelligent machine is that robots are not intelligent: they have no memory of their own and are not self-instructed. Using the above concept of Sixth Sense Technology we can give an eye to a robot or machine, which helps it see and sense its surroundings and act accordingly.


The following section illustrates the complete concept and working of our project.

    WORKING PRINCIPLE


We make use of Sixth Sense technology in our project: we capture the image using a webcam (wired or wireless) or any other camera. The more sensitive the camera, the more resolution we get, and hence the longer the distance at which we can sense objects. The captured image is processed, the object's location or position is found, and the corresponding command is given to the robot.
The following block diagram illustrates the working of our project.
Block diagram of the working principle












There are four processes required to carry out the task, and these processes are repeated at regular intervals of time.

                  i.            Recording Video:


The camera is interfaced to the computer or to whatever processor we use. The camera is switched on using the MATLAB command "videoinput", and the resolution of the camera and other parameters, such as the picture format the camera supports, are also set using MATLAB commands.
Once the command is given, the camera captures continuous video and hands it to MATLAB for further processing.

 
                ii.            Converting Video to Images:


Images are extracted by MATLAB at regular intervals of time from the captured video using the command "getsnapshot". The time interval, or delay, between extracting successive images is set using the command "pause".

 
              iii.            Processing Images:

The captured images are processed by MATLAB. First we have to confirm that the extracted images are in RGB format; if not, we have to convert them to RGB. Some cameras do not support the RGB format, and for those cameras this step is necessary. We used a camera which supports the "YUY2" (YCbCr) format; to convert it into RGB, we used the command "ycbcr2rgb".
YCbCr format of the extracted image
RGB format of the extracted image





































Once we get the image in RGB format we have to get sample pixel values of the required object. This step is performed once for a particular object, and repeated if the object changes. We use the command "impixel" to take sample pixel values of the object.
When we use this command we get a window containing the extracted image and a (+) cursor; by clicking the cursor on the required places we get the pixel values at those positions.
The sample pixel values are shown below. The first column shows the Red pixel values, the second column the Green values, and the third column the Blue values.


        67     51    161
        53     39    139
        40     31    104
        48     39    111
        38     27    110
        38     27    106
        62     61    164
        41     29    131
        44     34    106
        84     80    175


Using these sample pixel values we convert the RGB image to a binary image: pixels whose values lie in the specified range are converted to logic 1 (white pixel), and pixels out of that range are converted to binary value 0 (black pixel).
Binary image of the extracted image

Dividing the image plane into 9 regions


























To find the location of the object and give the appropriate movement command to the robot, we divide the plane into 9 regions as shown above, labelled 1 to 9. The labelling of the regions depends on the position of the object, and the regions are relabelled when the object's position changes: the region containing the object is labelled 1 and the other regions are labelled in circular fashion, as explained in the next section.
The pixel values of the image are stored in MATLAB in matrix form. For example, if the resolution of the image is 640x480, the pixel values are stored in a matrix of 640 columns and 480 rows. In the figure above, Cmax represents the maximum column number (640 in this example) and Rmax the maximum row number (480 in this example). Here c = Cmax/3 and r = Rmax/3; in this way the plane is divided into 9 regions.
To find the location of the object in the plane, we count the number of white pixels (binary value 1) of the binary image in each region. The region with the highest number of white pixels is the position of the object, and we then send the appropriate command to the robot. For example, if the maximum number of white pixels is in region 9, the object is in region 9 and the robot is given the command to stop.
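The white-pixel counting step can be sketched as follows (a Python illustration; regions are numbered 1 to 9 in row-major order, matching the initial labelling):

```python
def object_region(binary, rows=3, cols=3):
    """Return the region (1-9, row-major) holding the most white pixels."""
    Rmax, Cmax = len(binary), len(binary[0])
    r, c = Rmax // rows, Cmax // cols          # r = Rmax/3, c = Cmax/3
    counts = [0] * (rows * cols)
    for i in range(Rmax):
        for j in range(Cmax):
            if binary[i][j]:                   # white pixel
                region = min(i // r, rows - 1) * cols + min(j // c, cols - 1)
                counts[region] += 1
    return counts.index(max(counts)) + 1

binary = [[0] * 6 for _ in range(6)]
binary[4][4] = binary[5][5] = 1                # object in the bottom-right
print(object_region(binary))  # 9 -> robot is commanded to stop
```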
                  iv.            Computing & sending commands to the robot:

       Now we see how to compute the angle through which the robot should turn, clockwise or anticlockwise, to follow or track the object. At the beginning, regions 1 to 9 are labelled 1 to 9 respectively. First we find the region in which the object lies and its label value; then we calculate the angle through which the robot should rotate, and the appropriate command to give it, using the following rules.
        If the label is 9, the robot is commanded to stop. If the label value is greater than 4, the robot is rotated anticlockwise through the angle
              angle = (9 - label value) * 45 degrees
after which it is commanded to go forward.
        If the label value is less than or equal to 4, the robot is rotated clockwise through the angle
              angle = (label value - 1) * 45 degrees
after which it is commanded to go forward.

       For example, if the object is found in region 4, which is labelled 4, the robot is rotated clockwise, since the label value is equal to 4. The angle of rotation is
angle = (4 - 1) * 45 = 135 degrees.
       After rotating through 135 degrees the robot is commanded to go straight. The regions are then relabelled, with region 4 (the location of the object) labelled '1'.
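The labelling-to-angle rules can be collected into one small function (a Python sketch of the stated rules):

```python
def turn_command(label):
    """Map a region label (1-9) to (direction, angle in degrees)."""
    if label == 9:
        return ("stop", 0)
    if label > 4:
        return ("anticlockwise", (9 - label) * 45)
    return ("clockwise", (label - 1) * 45)

print(turn_command(4))  # ('clockwise', 135), as in the worked example
print(turn_command(8))  # ('anticlockwise', 45)
print(turn_command(9))  # ('stop', 0)
```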

            EXPERIMENTAL RESULTS

              We have simulated this work in MATLAB 7.12.0 and implemented using microcontroller (Atmega).

               If the object is found in region 2, which is labelled 2, the robot is instructed to rotate clockwise through an angle of 45 degrees. After rotating through 45 degrees the robot is instructed to move forward. The simulation results of this step are shown in figures 9 and 10; after this step the regions are relabelled as shown in figure 11.

             In the next interval, the object is made to lie in region 5, which has label value '4'. The robot is then instructed to rotate clockwise through an angle of 135 degrees, after which it is instructed to move forward. The simulation results of this step are shown in figures 12 and 13; after this step the regions are relabelled as shown in figure 14.
   
              Again, in the next interval of time, the object is made to lie in region 4, which has label value '8'. The robot is then instructed to rotate anticlockwise through an angle of 45 degrees, after which it is instructed to move forward. The simulation results of this step are shown in figures 15 and 16; after this step the regions are relabelled.

             In this way the process continues at regular intervals of time, tracking the object and giving appropriate commands to the robot.
Binary image containing the object in region 2

Simulation results when the object is in region 2 (label 2)

Relabelling regions when the object is in region 2 (label 2)
                                                                                                                                                             

    APPLICATIONS


1.       This project can be implemented in bomb-defusing robots, where human presence is dangerous.
2.       This project can be implemented for coloured object tracking.
3.       This project can be used in industries for coloured object separation.
4.       This project can be used to track the ball (focusing the camera automatically) in sports like tennis, cricket and football.
5.       This project can be implemented in gesture-controlled robotics.
6.       This project can be implemented in security systems such as a thief detector: a camera is installed when no one is present at home, and the entry of a thief is detected by checking for drastic pixel changes in the captured image.

      CONCLUSION

          Nowadays sixth sense technology is a new, emerging and interesting field with a lot of scope, and a lot of research and development work is being carried out on it. This technology provides a visual sense (an eye) to robots, which are capable of doing almost human jobs. Nowadays keyboards and scroll balls are being replaced by touch screens and touchpads; using this technology we can replace the touch screen with a virtual mouse and virtual keyboard. A lot of research, such as on background subtraction, is taking place to overcome some of the disadvantages present in this technology.