Humanoids Website for Angel Macias

Assignment 2

MATLAB Code

YouTube Video

Writeup

I first started the assignment by considering placing a template all over the map, but I didn't like the idea of taking pixel measurements and then trying the template everywhere; it felt slow and hard-coded. Instead, I used the intensity filter that Professor Atkeson provided and raised the threshold to remove a little more. Then I added my own filter to remove any straggling pixels that were isolated and away from the block, leaving only the block. I provided a video so you can see what I describe next. My program detects edges by checking whether a pixel has any neighbors that are 0 (due to the intensity filter), marks them green on the image, and stores them in a matrix. From that matrix I find the min and max values, which are the corners of the axis-aligned rectangle that bounds the block in the image; I then draw that rectangle and build matrices from its lines. Any edge points lying on those lines become the corners (in the code they're called tangentTop, tangentBot, etc., since they're the points that touch the line), and I plot them as stars. Finally I find the centroid as the midpoint between the corners of the bounding rectangle and plot it with a red star.
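The original MATLAB code is linked above rather than shown here, but the edge/bounding-box/centroid pipeline can be sketched as follows. This is a Python approximation of the idea, not the actual assignment code; the function name and the 4-neighbor choice are my assumptions.

```python
import numpy as np

def analyze_block(mask):
    """Given a binary mask (1 = block pixel after intensity filtering),
    find edge pixels, the bounding rectangle, and the centroid.
    Sketch of the described approach, not the original MATLAB code."""
    h, w = mask.shape
    edges = []
    for r in range(h):
        for c in range(w):
            if mask[r, c]:
                # A pixel is an edge if any 4-neighbor is 0 (or off-image).
                nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                if any(not (0 <= rr < h and 0 <= cc < w) or mask[rr, cc] == 0
                       for rr, cc in nbrs):
                    edges.append((r, c))
    rows = [p[0] for p in edges]
    cols = [p[1] for p in edges]
    top, bot = min(rows), max(rows)
    left, right = min(cols), max(cols)
    # Centroid = midpoint of the bounding rectangle's corners.
    centroid = ((top + bot) / 2.0, (left + right) / 2.0)
    return edges, (top, bot, left, right), centroid
```

Edge points that land exactly on the rectangle's four lines would then be picked out as the block's corners.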

 

Next is the orientation of the block, for which I did use a template of sorts. In the video, after the rectangle and corners are drawn, a couple of green lines are drawn. If the green lines match up well with the block, then the block is at either 0 rad or pi/2 rad. I check which by seeing whether the bottom line is longer: if it is, the block is at 0 rad; otherwise the bottom line is shorter, so the block is standing up at pi/2 rad. The line that is longer, and therefore indicates 0 or pi/2 rad, then lights up yellow. I used a template here because, due to the flatness of the block in this position, there are too many points tangential to the line I described earlier, which confuses the corner detection.
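The "which line is longer" check reduces to comparing the horizontal and vertical extents of the bounding rectangle. A minimal Python sketch of that decision, with a hypothetical helper name and bbox layout I chose for illustration:

```python
import math

def axis_aligned_orientation(bbox):
    """Decide between 0 and pi/2 rad for an axis-aligned block.
    Assumes bbox = (top, bot, left, right) in pixel coordinates.
    If the bottom (horizontal) extent is the longer one, the block
    lies flat at 0 rad; otherwise it stands upright at pi/2 rad."""
    top, bot, left, right = bbox
    width = right - left    # length of the bottom green line
    height = bot - top      # length of the side green line
    return 0.0 if width >= height else math.pi / 2
```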

 

After the template I compute the angle between two points, either the top and left or the bottom and left corners, depending on which side of the block is longer. In the video you can see a red line drawn between the two points. I then use atan2 with the differences in y and x between the points to find the block's orientation angle, which finishes the program.
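That final step is a single atan2 call on the two chosen corners. In Python terms (point layout `(x, y)` is my assumption):

```python
import math

def block_angle(p_a, p_b):
    """Orientation of the block's long side from two corners,
    each given as (x, y). Mirrors MATLAB's atan2(dy, dx)."""
    dy = p_b[1] - p_a[1]
    dx = p_b[0] - p_a[0]
    return math.atan2(dy, dx)
```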

 

I went through that process fairly sequentially, in that I didn't have to backtrack and change the design (for the most part), aside from the beginning when I tried using a template. There was, however, a big issue in getting the corners at first. I tried finding a group of edge pixels that formed two intersecting lines, but the code for it seemed too complicated, and I felt there was a better way. I then realized I could build a rectangle from the outermost edge pixels that bounds the block and use its edges to find the tangential edge points that become the corners of the block.

 

Question 1)

The centroid is probably the most accurate measurement in my program, since I first find the rectangle that bounds the block in the image and then take its center, so it is probably accurate to within a couple of millimeters. The corners of the block tend to be a bit off due to how much I filter the image. In some images part of a corner or side is filtered out, so the corner my program computes isn't the true corner but one that is (usually) very nearby, though in some cases farther away; it is probably accurate to within a centimeter or two. The angle is taken along the long side of the block, which I think is accurate to a rather small error, perhaps within 0.01-0.1 rad.

 

Question 2)

I didn't have to use the depth array for the centroid and theta parts of the first assignment, just the RGBA images. However, missing depth values could be treated the same way as filtered-out RGBA values (which are also essentially 0): we ignore them, given how many data points we still have.


Assignment 1

MATLAB Code

YouTube Video 

Writeup (Right click and save if you wish to download it)

CMU 16-264 Humanoids IK Optimization Writeup

Angel Macias

Writing the MATLAB Code

I first took a while just reading what I was given, trying to understand how the initial parameters were set up, what they did, and how they formed the arm in its resting position. Then I made many of the variables global for better accessibility, such as p_d and the initial hand position, along with some new globals like pre_grasp_d (my desired pre-grasp position). I had the main program simply run maincon and draw the resulting answer, which led the hand straight to the objective and was pretty simple.

Then I created a way to compute a pre-grasp position by aligning a vector to the x_d angle and subtracting or adding (based on the quadrant) a small offset so that the fingers don't run into the object.
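Backing off along the approach direction can be written compactly with cos/sin, which handles the subtract-or-add-by-quadrant sign automatically. A Python sketch (the function name and the 0.05 offset are illustrative, not from the original MATLAB code):

```python
import math

def pre_grasp(p_d, theta_d, offset=0.05):
    """Back off from the desired grasp point p_d = (x, y) along the
    approach angle theta_d by 'offset', so the fingers don't run
    into the object. Sketch of the described idea."""
    return (p_d[0] - offset * math.cos(theta_d),
            p_d[1] - offset * math.sin(theta_d))
```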

After the pre-grasp I created a trajectory function that uses linear interpolation to generate x_d points for the hand to follow on the way to the pre-grasp destination, giving a relatively smooth motion. After reaching the pre-grasp position the hand waits a bit, then moves forward to finish the objective. I'll briefly add that my thought process for this part was a bit odd: I didn't think of linear interpolation at first, but instead thought of having the joints move directly to the pre-grasp position at a given rate by computing each part's distance to the goal and dividing by an artificial rate to get the number of increments. That initial method was going to be rather tedious, so I decided to use the linear interpolation method I learned in class, since it's much more elegant.
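The linear-interpolation trajectory is a one-liner in any language; here is a Python sketch under my own naming (the original MATLAB version isn't shown on this page):

```python
def trajectory(x_start, x_goal, n_steps):
    """Return n_steps + 1 linearly interpolated waypoints from
    x_start to x_goal (both inclusive), each a tuple of coordinates.
    Each waypoint would be handed to the optimizer as the next x_d."""
    return [tuple(s + (g - s) * k / n_steps
                  for s, g in zip(x_start, x_goal))
            for k in range(n_steps + 1)]
```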

The criterion function is basically the same as what was given to us in the previous example files, which makes sure the hand is aligned and in the same area. However, I added a small bias toward keeping the joint angles the same as the leading angle.
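One plausible reading of that bias term, sketched in Python: a small weighted sum of squared deviations of each joint angle from the first (leading) joint's angle, added to the hand's task error. The weight value and names here are guesses, not the assignment's actual constants.

```python
def criterion(angles, hand_err, w_bias=0.01):
    """Task error plus a small regularizer pulling every joint angle
    toward the leading (first) joint's angle. Illustrative sketch of
    the described bias, not the original MATLAB criterion."""
    lead = angles[0]
    bias = sum((a - lead) ** 2 for a in angles)
    return hand_err + w_bias * bias
```

With a small weight, the bias only breaks ties between arm configurations that satisfy the hand constraint equally well.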

What worked rather well was getting the arm to the pre-grasp position smoothly and then grasping. What doesn't seem to work well is getting the arm into other quadrants, though I suppose that's due to the lb and ub constraints on the angles that force the arm to stay in a general area. What's rather difficult is checking whether the target angle is unreachable without trying to reach it first.

1)

Ignoring the target angle first: the target can't be reached when it is farther away than the arm fully extended. Factoring the angle in, if the arm has to bend too much to reach it, then clearly it can't be reached; that is, if, as the arm reaches for the object, each joint angle hits either its upper or lower bound, then the object can't be reached. The optimizer getting stuck at a local minimum also means the object can't be reached, but that we didn't realize it beforehand and didn't stop the optimizer from trying to reach an unreachable target.

2)

We can first draw a maximum circle whose radius is the sum of the link lengths. For the inner radius, set all the joints to their upper limits and measure how close the tip is to the origin; that distance is the radius of the inner circle, creating a donut-shaped (annular) range for the (x, y) of the arm's tip. The angle is much harder to characterize. I suppose if we discretize the area a bit, we can have the arm try out segments of the area with various angles in each segment, and if the arm can't reach one, add it to a list of unreachable (x, y, angle) points. It seems a bit naïve, but I think it may be the way humans do things: we try to reach an object, and if we have to grab it with our hand at a certain angle, we keep trying to place our hand at that angle until it works or we give up.
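The donut-range idea above can be sketched with planar forward kinematics. This is a Python illustration under my own assumptions (relative joint angles, base at the origin); the inner-radius heuristic is the one described, which need not be the true minimum reach for every arm.

```python
import math

def reach_annulus(link_lengths, upper_limits):
    """Outer radius: all links stretched straight out.
    Inner radius (heuristic from the text): every joint at its
    upper limit, then the base-to-tip distance.
    Returns (r_inner, r_outer)."""
    r_outer = sum(link_lengths)
    # Forward kinematics of a planar chain with all joints at their limits.
    x = y = th = 0.0
    for L, q in zip(link_lengths, upper_limits):
        th += q                 # relative joint angles accumulate
        x += L * math.cos(th)
        y += L * math.sin(th)
    r_inner = math.hypot(x, y)
    return r_inner, r_outer

def in_donut(p, r_inner, r_outer):
    """Is the point p = (x, y) inside the annular reachable range?"""
    r = math.hypot(p[0], p[1])
    return r_inner <= r <= r_outer
```

For a two-link arm with unit links and limits (0, pi), the arm folds back on itself, so the inner radius collapses to roughly zero and the range is the full disk of radius 2.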

3)

The arm can now reach a bit into the 4th quadrant, though not completely. Objects closer to the +x-axis, with an angle range of roughly 0.4 < θ < 0.6 rad, would probably be accessible now due to this range change. I'm not sure I can rigorously prove the mathematics for this, considering how difficult it seems based on what I've seen and read online, but essentially it looks like a chunk of the 1st quadrant has been made inaccessible and traded for accessibility in the 4th quadrant.