Weekly+Spaceless

Test Topic Links of World War II

Lisa Sheldon Matje's tasks

//Aircraft Missions in World War II & Bla-bla-pla//
http://www.hoxity.de/papercraft/marauder_main.html#hamilton_heymaker2
http://en.wikipedia.org/wiki/List_of_aircraft_of_World_War_II#Jet-_and_rocket-_propelled_fighters
Unlimited reliable sources for aircraft of World War 2.

===Paper Aircraft!===
Google this: "world war 2 paper aircraft pdf"

How Stuff Works: aircraft history. To research: night fighter, twin-engine fighter, bomber interceptor, high-altitude interceptor, rocket interceptor, mixed power (piston and jet engine), maritime patrol, jet bomber, trimotor, torpedo and dive bombers, and lots and lots more.

===News===

Robotic Aircraft Controlled by Human Hand Gestures
Aircraft carrier crews already use a set of hand gestures and body positions to guide pilots around the deck. But with an increase in unmanned planes, what if the crew could use those same gestures to guide robotic aircraft?

A team of researchers at MIT — Computer Science student Yale Song, his advisor Randall Davis and Artificial Intelligence Laboratory researcher David Demirdjian — set out to answer that question.

They’re developing a Kinect-like system (Microsoft’s Xbox 360 peripheral wasn’t available when the team started the project) that can recognise body shapes and hand positions in 3-D. It uses a single stereo camera to track crew members, and custom-made software to detect each gesture.

First, it captures a 3-D image of the crew member and removes the background. Then, to estimate which posture the body is in, it compares the person against a handful of skeleton-like models to see which one fits best. Once it’s got a good idea of the body position, it also knows approximately where the hands are located. It zeros in on these areas and looks at the shape, position and size of the hand and wrist. Then it estimates which gesture is being used: maybe the crew member has their palm open, their fist clenched or their thumb pointing down.

The biggest challenge is that there’s no time for the software to wait until the crew member stops moving to begin its analysis. An aircraft carrier deck is in constant motion, with new hand gestures and body positions every few seconds. “We cannot just give it thousands of [video] frames, because it will take forever,” Song said in a press release. Instead, it works on a series of short body-pose sequences that are about 60 frames long (roughly three seconds of video), and the sequences overlap each other. It also works on probabilities rather than exact matches.

In tests, the algorithm correctly identified the gestures with 76 percent accuracy. Pretty impressive, but not good enough when you’re guiding multimillion-dollar drones on a tiny deck in the middle of the ocean. But Song reckons he can increase the system’s accuracy by considering arm position and hand position separately.
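The sliding-window idea described above can be sketched in a few lines of Python. This is only a toy illustration, not the MIT team's actual software: the window and stride sizes, the gesture labels, and the `classify_window` scoring function are all hypothetical stand-ins for the real pose-estimation model. It shows the two key points from the article: score short, overlapping ~60-frame sequences instead of the whole stream, and accumulate probabilities per gesture rather than demanding exact matches.

```python
# Toy sketch of overlapping-window, probabilistic gesture recognition.
# All names and the scoring function are hypothetical; the real system
# fits skeleton models to 3-D stereo images before classifying.

from collections import Counter

WINDOW = 60   # ~3 seconds of video, per the article
STRIDE = 30   # overlap consecutive windows by half

def classify_window(frames):
    """Stand-in for the real model: return a probability per gesture
    label for one short sequence (here a 'frame' is just a label)."""
    counts = Counter(frames)
    total = sum(counts.values())
    return {gesture: c / total for gesture, c in counts.items()}

def recognise(frames):
    """Slide overlapping windows over the stream and accumulate
    per-gesture probabilities; report the most likely gesture."""
    scores = Counter()
    for start in range(0, max(1, len(frames) - WINDOW + 1), STRIDE):
        for gesture, p in classify_window(frames[start:start + WINDOW]).items():
            scores[gesture] += p
    best, _ = scores.most_common(1)[0]
    return best

# Example: a stream dominated by an open-palm gesture with a brief fist
stream = ["palm_open"] * 80 + ["fist"] * 20 + ["palm_open"] * 40
print(recognise(stream))  # -> palm_open
```

Because each window only contributes a probability, a brief misread in one window is outvoted by the overlapping windows around it, which is why the approach tolerates the constant motion on a carrier deck.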