Ford Motor Company Autonomous Vehicle Sensor Fusion Engineer in Ann Arbor, Michigan
Autonomous Vehicle Sensor Fusion Engineer
Job Description & Qualifications:
At Ford Motor Company, we believe freedom of movement drives human progress. We also believe in providing you with the freedom to define and realize your dreams. With our incredible plans for the future of mobility, we have a wide variety of opportunities for you to accelerate your career potential as you help us define tomorrow’s transportation.
Ford Motor Company is committed to expanding our core business from auto manufacturer to a much broader mobility solutions company. Autonomous Vehicles will play a major role in many of these upcoming mobility solutions. In fact, Autonomous Vehicles will play a key role in the future of Ford, in the future of transportation, and in the future of how people interact with their world.
This future is being created today by Ford’s AV LLC team. This fun, fast-moving, innovative group of highly skilled and motivated people is looking for candidates to support research and development efforts in producing fully autonomous vehicles (SAE Level 4).
The “Autonomous Vehicle Sensor Fusion Engineer” position entails developing, implementing, and testing sensor fusion algorithms for object detection, classification, tracking, and intent prediction. By combining sensor data from LIDAR, RADAR, and camera systems, these perception algorithms allow an autonomous vehicle to understand its nearby environment and provide the means for safe operation of the vehicle in simple-to-complex real-world situations. Development of these situational awareness algorithms includes such diverse topics as multi-sensor object detection/classification, simultaneous tracking of multiple objects through complex scenes, and multi-sensor fusion, including the combination of each sensor mode as well as the fusion of on-board and off-board data.
Ideal candidates will have a strong grasp of computer science fundamentals, a solid background in at least two of the aforementioned sensors, a history of implementing sensor fusion algorithms, and experience with deploying perception solutions for autonomous vehicles.
What you’ll be able to do:
Design, implement, and test algorithms for multi-sensor fusion to support perception/situational awareness, object detection, object classification, simultaneous tracking of multiple objects through complex scenes, and the prediction of an object’s future position (i.e., intent).
Support the calibration, testing, interface specification, and integration of a variety of sensor data (Camera, LIDAR, RADAR, Ultrasonic, GPS/IMU, etc.) into the perception algorithms for prototype-to-production system development.
System integration of data acquisition and data processing from a variety of sensors including camera, LIDAR, RADAR, ultrasonics, etc.
Develop, implement, and utilize specific metrics to quantify the performance of sensor fusion techniques for detection, classification, and tracking algorithms.
Design and execution of experiments, data collections, data analysis, and performance evaluation from both simulation and real vehicle data.
Utilizing hardware-in-the-loop, software simulation, and in-vehicle experimentation for algorithm testing and software validation.
Presentation of designs, challenges, implementation details, and results during periodic reviews and technical interchange meetings.
Minimum requirements we seek:
Master’s degree in Robotics, Electrical Engineering, Signal Processing, Computer Science, or a similar field, or 10 years of equivalent work experience.
Three or more years’ experience with algorithm development and implementation in at least three of the following areas: multi-sensor fusion, image processing, remote sensing, object detection/classification, intent prediction, model-based estimation techniques and/or multi-target tracking.
Three or more years’ experience developing high-quality C/C++ code for object detection and tracking algorithms, multi-threaded applications, and data visualization.
Preferred requirements:
Experience working in a team-based project from inception to demonstration across multiple disciplines, e.g., hardware design and implementation, data acquisition and analysis, mathematical modeling, algorithm development, application implementation, etc.
Experience with multiple sensor systems such as LIDAR, RADAR, cameras, etc.
Experience with probabilistic state estimation, Bayesian inference, Kalman filtering, and non-linear optimization techniques
Strong foundation in linear algebra, vector analysis, and probability/statistics
Working knowledge of an inter-process communication method (such as ROS, LCM, ZeroMQ, etc.)
Proficiency in multiple operating systems such as Windows, UNIX, Linux, etc.
Experience developing custom software within a large codebase, including proficiency with software version control systems (e.g., git or svn), code reviews, and style guidelines
Experience with Linux development using tools for code debugging and profiling
Experience with system requirements, testing, validation, and Agile software development
Demonstrated ability to generate scientific reports and presentations
What you’ll receive in return:
As part of the Ford family, you’ll enjoy excellent compensation and a comprehensive benefits package that includes generous PTO, retirement, savings and stock investment plans, incentive compensation and much more. You’ll also experience exciting opportunities for professional and personal growth and recognition.
If you have what it takes to help us redefine the future of mobility, we’d love to have you join us.
Visa sponsorship may be available for this position.
By choice, we are an Equal Opportunity Employer committed to a culturally diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, color, age, sex, national origin, sexual orientation, gender identity, disability status or protected veteran status.