Vision

Interested in Robotics? Here's the place to be.
hlreed
Posts: 349
Joined: Wed Jan 09, 2002 1:01 am
Location: Richmond, TX

Vision

Post by hlreed »

I am back to working on vision. I get to a point and have to stop. Then back again. Vision must be solved before robots can be freed.
I have a mechanism to find objects, and object size in one dimension. This uses ISNodes to find objects and GNodes to find size. (Color is a separate property that requires only triplicating the sensors with 3 filters; then you can add up the total light for the main computations.)
Detection is SeenObject = SL is SR, where SL and SR are ISNode trees. Size is the number of layers in the tree. (Light amplitude is the value of SeenObject, if it has a value.)
So ThisObject = SeenObject + size + color + loc.

Fine. So what do I do with ThisObject? There will be lots of them.
In the robot, ThisObject will always have more properties added, becoming a new object.
ThisNewObject = ThisObject + sound, for example.

Discussion can be on ISNodes and the mechanism if you wish, or whatever.
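As a rough illustration of where this is heading, here is a minimal sketch in plain Python (not the ISNode/GNode hardware; every field name and number below is just for the example): a seen object is a record, and each later stage simply adds properties to it, giving a new object.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SeenObject:
        amplitude: float              # light amplitude, if the detector has a value
        size: int                     # number of ISNode tree layers that matched
        color: tuple = (0, 0, 0)      # totals from the three filtered sensors
        loc: int = 0                  # where in the sensor field it was seen

    @dataclass
    class ThisObject:
        seen: SeenObject
        sound: Optional[float] = None   # later stages add more properties
        touch: Optional[float] = None

    # ThisNewObject = ThisObject + sound, for example:
    obj = ThisObject(SeenObject(amplitude=0.7, size=2, color=(10, 12, 9), loc=1))
    obj.sound = 0.3   # same record, one more property, hence a "new" object

The point is only that ThisObject is open-ended: whatever a later sensor contributes becomes one more property on the same record.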
Harold L. Reed
Microbes got brains
bwts
Posts: 229
Joined: Tue Jun 11, 2002 1:01 am
Location: britain

Re: Vision

Post by bwts »

Surely the function of the robot will determine what is to be done with ThisObject. Maybe ThisObject could be further split into ThisDangerousObject + ThisUsefulObject, depending on what it is the robot would like to do with objects!

B)
"Nothing is true, all is permitted" - Hassan i Sabbah
hlreed
Posts: 349
Joined: Wed Jan 09, 2002 1:01 am
Location: Richmond, TX

Re: Vision

Post by hlreed »

B, you are in the middle as usual. With objects, the robot can seek or avoid objects instead of seeking or avoiding light. Objects are defined by their properties. Good objects and bad objects are determined by the properties good and bad, which in turn are defined by other properties.

What I am looking for is the simplest architecture that will do the whole vision thing (identifying the figure in figure and ground).

Help!
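As a toy illustration of that split, assuming for the moment that "bad" is just a property computed from other properties (the property names and thresholds below are invented):

    def is_bad(obj: dict) -> bool:
        # e.g. a large object that is also loud is treated as dangerous
        return obj.get("size", 0) >= 3 and obj.get("sound", 0.0) > 0.5

    def action_for(obj: dict) -> str:
        # seek or avoid objects instead of seeking or avoiding light
        return "avoid" if is_bad(obj) else "seek"

    print(action_for({"size": 4, "sound": 0.8}))   # -> avoid
    print(action_for({"size": 1, "sound": 0.1}))   # -> seek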
Harold L. Reed
Microbes got brains
bwts
Posts: 229
Joined: Tue Jun 11, 2002 1:01 am
Location: britain

Re: Vision

Post by bwts »

I've always liked the idea of using feelers instead of eyes; something I intend to explore in more depth. I have some info on electronic vision somewhere. I'll dig it out and you can see if any of it is relevant.

B
"Nothing is true, all is permitted" - Hassan i Sabbah
hlreed
Posts: 349
Joined: Wed Jan 09, 2002 1:01 am
Location: Richmond, TX

Re: Vision

Post by hlreed »

Vision was invented in the Cambrian explosion of life. The competition ate up all the blind beings, and they had to have vision to survive. That is going to be true of robots soon, although they do not eat each other yet.
All the vision texts want to make a movie of what you see, and then you have to see that. All of this is wrong, or at least not useful.
Thanks for the comment.
I will have more soon.
Harold L. Reed
Microbes got brains
bwts
Posts: 229
Joined: Tue Jun 11, 2002 1:01 am
Location: britain

Re: Vision

Post by bwts »

Stereo vision is better than mono. That's all I have to say on the subject for now. :)

B
"Nothing is true, all is permitted" - Hassan i Sabbah
bwts
Posts: 229
Joined: Tue Jun 11, 2002 1:01 am
Location: britain

Re: Vision

Post by bwts »

Just in case you haven't read everything on electronic vision, have a peep at
http://www.edmundoptics.com/TechSupport ... icleid=286

B
"Nothing is true, all is permitted" - Hassan i Sabbah
chessman
Posts: 292
Joined: Tue Jan 14, 2003 1:01 am
Location: Issaquah, WA

Re: Vision

Post by chessman »

I'm not sure what devices you're using for vision, Harold, but I have a little idea.

Take a laser diode (any wavelength) and put it through a diffraction grating. The pattern reflecting off objects would be something like:

    x
   x x
  x x x
   x x
    x

A CCD camera would have a filter on it matched to the wavelength of the laser (duh...). Relatively simple code could be used to measure the distance between dots.

Using this technique, you could essentially create a "3D" map of the surroundings. Obviously, the pattern could be tailored to your needs.

There's a good article at the Seattle Robotics Society about robotic vision... I forget the link, but it's not that hard to find.
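As a sketch of that "relatively simple code", here is the standard laser/camera triangulation (my own constants, not anything specific to the grating): the offset of a dot from the image centre, together with the laser-to-camera baseline, gives the range.

    def range_from_pixel_offset(baseline_m, focal_px, pixel_offset):
        """baseline_m   - distance between the laser and camera axes, in metres
        focal_px     - camera focal length, expressed in pixels
        pixel_offset - how far the dot lands from the optical centre, in pixels"""
        if pixel_offset == 0:
            return float("inf")   # dot at the centre => surface effectively at infinity
        return baseline_m * focal_px / pixel_offset

    print(range_from_pixel_offset(0.10, 700.0, 35.0))   # -> 2.0 (metres)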
hlreed
Posts: 349
Joined: Wed Jan 09, 2002 1:01 am
Location: Richmond, TX

Re: Vision

Post by hlreed »

Thanks, chessman. What I am using now are light sensors I get at Jameco. They have a built-in lens and a wide field of view, and are the closest to vision I have found (800). It only takes 4 of those to separate ground and figure (in one spot).
I assume that a figure is a patch that is similar. That is my definition. Given that, the mechanism is to compare sensors. Take sensors A, B, C, D in a line. If B = C, we have a spot in the center. If C = D, we have a spot at the right, and A = B is a spot at the left, all of length 1. If A = B = C, we have a length-2 spot at the left. If A = B = C = D, we are looking at a wall, length 3 (the object is larger than our sensor field).
Augment this with 4 upright sensors and we can combine the lengths into rectangles (see the sketch below).
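Here is a small sketch of that comparison in plain Python, assuming "equal" means within a tolerance, since real sensors never match exactly (the readings are invented):

    def equal(a, b, tol=0.05):
        return abs(a - b) <= tol

    def find_spots(readings, tol=0.05):
        """readings: light levels from sensors A, B, C, D ... left to right.
        Returns detected spots as (starting sensor, length) pairs."""
        spots = []
        i = 0
        while i < len(readings) - 1:
            length = 0
            while i + length + 1 < len(readings) and equal(readings[i + length], readings[i + length + 1], tol):
                length += 1
            if length > 0:
                spots.append((i, length))   # spot starting at sensor i, "length" as above
                i += length
            else:
                i += 1
        return spots

    # A = B = C but not D: a length-2 spot at the left.
    print(find_spots([0.80, 0.81, 0.79, 0.30]))   # -> [(0, 2)]
    # A = B = C = D: the object fills the field ("a wall", length 3).
    print(find_spots([0.5, 0.5, 0.5, 0.5]))       # -> [(0, 3)]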
This is how it starts out. What comes out of this is a stream of rectangular figures. Call these objects. Along with these we have other data from sound and touch in the same time interval. All of these become properties of the object.
An object with properties is a concept, so we now have a stream of concepts. These can be compared with other streams and so on, but I really don't know how to turn these into actions.
That is where I am now.
Harold L. Reed
Microbes got brains
dribach
Posts: 11
Joined: Wed Sep 10, 2003 1:01 am

Re: Vision

Post by dribach »

chessman has the right idea, but instead of diffracting the laser into dots, pass it through a line generator (basically a glass cylinder) to make a horizontal line. Then mount the CCD camera sideways, a little bit above the laser. What you'll get is an image of a vertical line on the screen. The top of the screen is to the left of the robot, the bottom of the screen is to the right. The further to the right of the screen the line gets, the further away the object is.

This will take care of anything that's in the robot's way, as long as the laser hits it. It should be easier to decode than the dots, though it's only useful on a floor; it wouldn't be very good for, say, a helicopter.
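A toy sketch of the decoding step, assuming a thresholded camera frame where each row corresponds to a bearing and the laser shows up as the brightest pixel in that row (the frame below is invented):

    def scan_profile(image, threshold=200):
        """image: 2D list of pixel brightness, rows = bearings, columns = range.
        Returns, for each bearing, the column of the brightest laser pixel
        (or None if the laser never hit anything along that bearing)."""
        profile = []
        for row in image:
            best_col, best_val = None, threshold
            for col, val in enumerate(row):
                if val > best_val:
                    best_col, best_val = col, val
            profile.append(best_col)
        return profile

    # Toy 3x6 frame: the surface is nearer (small column) at the top/left.
    frame = [
        [0, 255, 0, 0, 0, 0],
        [0, 0, 0, 255, 0, 0],
        [0, 0, 0, 0, 0, 255],
    ]
    print(scan_profile(frame))   # -> [1, 3, 5]

Turning a column number into an actual distance would still need the same kind of triangulation calibration as with the dots.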
hlreed
Posts: 349
Joined: Wed Jan 09, 2002 1:01 am
Location: Richmond, TX

Re: Vision

Post by hlreed »

Thanks, dribach. Getting the image is solved; I have pixels. What I am trying to do is the simplest computation that will separate figure from ground. That is, going from an object length to building the object from the spots, and working with objects instead of pixels. Every method I have used requires an awful lot of machinery, and it is probable there is no simple way, except to use a lot of machinery.
pixels -> maybe-object and location of the maybe-object.
maybe-object -> object with properties at location. The eyes sweep from maybe-object to maybe-object, remembering each maybe-object and its location. Eventually this must be sorted into fixed and moving objects. Doing this is expensive, requiring lots of nodes (even for simple, single maybe-object detectors).
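For what it is worth, here is a toy sketch of that last step in plain Python, assuming the sweep can already tell which maybe-object is which from one interval to the next (which is itself the hard part); it just remembers locations and splits the objects into fixed and moving:

    def track(frames):
        """frames: a list of snapshots; each snapshot maps object id -> location."""
        history = {}
        for snapshot in frames:
            for obj_id, loc in snapshot.items():
                history.setdefault(obj_id, []).append(loc)
        return {
            obj_id: ("fixed" if len(set(locs)) == 1 else "moving")
            for obj_id, locs in history.items()
        }

    # Object "a" stays put, object "b" drifts to the right across three sweeps.
    print(track([{"a": 2, "b": 5}, {"a": 2, "b": 6}, {"a": 2, "b": 7}]))
    # -> {'a': 'fixed', 'b': 'moving'}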
Harold L. Reed
Microbes got brains