Your needs

Designing an optronic system

Prototype a complete vision system: computer + sensors (vision, radar, lidar, etc.)

Do you need to design a complete embedded system to solve your problem (detect, recognise, identify, trigger an action, etc.)?

We’ll break your project down and design a complete system built around a high-performance neural-network computer (CPU/GPU/NPU/FPGA: NVIDIA AGX Orin, Xilinx UltraScale+/Versal) and multi-sensor fusion [cameras (UV, visible, night/VNIR, SWIR, thermal MWIR & LWIR, THz), radar, lidar].
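
Purely as an illustrative sketch of the multi-sensor data flow such a system handles (not our actual design; the structures, field names and gating thresholds below are hypothetical placeholders), here is how timestamped camera and radar detections might be associated in C++ before being handed to a neural-network back end:

    // Minimal illustrative sketch: associating timestamped camera and radar
    // detections before passing them to a neural-network back end.
    // All types, names and thresholds here are hypothetical placeholders.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Detection {
        double timestamp_s;   // acquisition time in seconds
        double azimuth_deg;   // bearing of the detected object
        double range_m;       // radar only: -1 if unknown (camera)
    };

    // Pair each camera detection with the radar detection closest in time
    // and bearing, producing fused detections with both angle and range.
    std::vector<Detection> fuse(const std::vector<Detection>& camera,
                                const std::vector<Detection>& radar) {
        std::vector<Detection> fused;
        for (const auto& c : camera) {
            const Detection* best = nullptr;
            double best_cost = 1e9;
            for (const auto& r : radar) {
                double dt  = std::fabs(c.timestamp_s - r.timestamp_s);
                double daz = std::fabs(c.azimuth_deg - r.azimuth_deg);
                if (dt > 0.05 || daz > 2.0) continue;   // gating thresholds (illustrative)
                double cost = dt * 10.0 + daz;
                if (cost < best_cost) { best_cost = cost; best = &r; }
            }
            if (best) fused.push_back({c.timestamp_s, c.azimuth_deg, best->range_m});
        }
        return fused;
    }

    int main() {
        std::vector<Detection> cam   = {{0.000, 12.3, -1.0}};
        std::vector<Detection> radar = {{0.010, 12.1, 840.0}};
        for (const auto& d : fuse(cam, radar))
            std::printf("t=%.3f s  az=%.1f deg  range=%.0f m\n",
                        d.timestamp_s, d.azimuth_deg, d.range_m);
        return 0;
    }

In a real system this association step sits alongside calibration, tracking and the inference pipeline itself; the sketch only shows the gating idea.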


Our experience: space, defence, aeronautics, industry, agriculture.

Designing an embedded vision computer

Do you already have a sensor system but need a powerful on-board computer for image processing and analysis?

We design your custom high-performance neural-network computer (CPU/GPU/NPU/FPGA: NVIDIA AGX Orin, Xilinx UltraScale+/Versal) within your SWaP-C constraints so that it integrates seamlessly into your system.

Our experience: space, defence, aeronautics, industry, agriculture.

Designing an evaluation kit or reference design for a sensor or processor

Are you an image sensor or processor manufacturer?

We design an evaluation kit or reference design for your sensor or processor so that potential customers can test your device.

Buying an IP core

Are you looking for an IP core for image processing or analysis?

If you need to port an image processing or analysis algorithm to an FPGA or another processing architecture (CPU/GPU/many-core/NPU/DLA), we’ll design and implement it for you.
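
As a hedged illustration of what such a port starts from, the plain C++ below implements a generic 3x3 convolution of the kind that gets mapped onto FPGA, GPU or NPU targets; the image size, kernel values and function name are placeholders, not an actual deliverable:

    // Generic 3x3 convolution on an 8-bit grayscale image, written as the kind
    // of portable reference code an FPGA, GPU or NPU implementation is derived
    // from. Image size and kernel values are placeholders.
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    std::vector<uint8_t> convolve3x3(const std::vector<uint8_t>& src,
                                     int width, int height,
                                     const float kernel[9]) {
        std::vector<uint8_t> dst(src.size(), 0);
        for (int y = 1; y < height - 1; ++y) {          // skip the 1-pixel border
            for (int x = 1; x < width - 1; ++x) {
                float acc = 0.0f;
                for (int ky = -1; ky <= 1; ++ky)
                    for (int kx = -1; kx <= 1; ++kx)
                        acc += kernel[(ky + 1) * 3 + (kx + 1)] *
                               src[(y + ky) * width + (x + kx)];
                dst[y * width + x] =
                    static_cast<uint8_t>(std::clamp(acc, 0.0f, 255.0f));
            }
        }
        return dst;
    }

    int main() {
        const int w = 640, h = 480;
        std::vector<uint8_t> image(w * h, 128);          // dummy flat-grey frame
        const float blur[9] = {1/9.f, 1/9.f, 1/9.f,      // simple box blur
                               1/9.f, 1/9.f, 1/9.f,
                               1/9.f, 1/9.f, 1/9.f};
        auto out = convolve3x3(image, w, h, blur);
        return out[w + 1] == 128 ? 0 : 1;                // trivial sanity check
    }

A simple nested-loop reference like this is typically what the accelerated implementation is validated against.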

Questions to ask yourself when planning a vision project

  • What do we want to see?
    • I want to see:
      • from a distance
      • at night
      • through diffusive media (fog, snow, underwater, etc.)
      • through matter
      • quickly
      • in all spectral bands (beyond the visible: seeing the invisible)
      • at very high frame rates (e.g. 10,000 fps)
      • with a very wide field of view (at short range) or very far away (with a very narrow field of view)
      • all the time (e.g. 24 hours a day)
  • What do we want to analyse?
    • e.g. I want to identify the material an observed object is made of.
    • DRI (Detection, Recognition, Identification) range
  • What time frame?
    • How often do I want to do this?
  • Are there any particular constraints linked to the environment?
    • Pressure (e.g. in space, in deep water, etc.)
    • Temperature
    • SWaP-C (Size, Weight, Power, Cost)
    • Form factor:
      • AR helmet
      • Single-sensor on-board camera
      • Multi-sensor on-board camera
      • Sensor card design
      • Gyro-stabilised ball
  • Does the information or an image need to be presented to a human?
    • If so, in what form (images, alert, etc.)? If not, what action should be triggered (system interfacing)?