News
  • How and why electric vehicles will change the way cars look

    2019-08-23
    Once a novelty, electric vehicles (EVs) have moved into the mainstream thanks to high-profile companies like Tesla and best-sellers like the Nissan Leaf. These cars are feature-packed and technology-heavy, but the innards aren’t the only part of the car that’s changing. These cars look different on the outside too, and the changes to the inside are driving some of those made to the exterior. Electric power needs fewer moving mechanical parts but more electronic ones. To account for these changes, vehicle designers are reimagining what a car looks like, and what a car can do. While many of these changes are evolutionary, some are quite revolutionary, too.

    Grille-less fronts, ‘frunks,’ and sensor-coated bumpers

    It’s easy to spot an EV approaching you on the road because the front end of electric cars looks different from gas-powered vehicles. Fewer moving mechanical parts are needed in an EV, and in most cars a majority of those parts are found up front within the engine bay. All the space freed up behind the front wheels is available for use elsewhere. “With added freedom in the absence of an engine up front, we can expect manufacturers to get really creative in complete redesigns of the front end,” CARiD product training director and former automobile engineer Richard Reina says.

    One of the most noticeable differences between electric and combustion engine-powered vehicles is the elimination of the grille. Electric-powered cars require far less ventilation than a radiator usually demands. While these cars still need to shed heat, it’s nowhere near as much as that produced by a traditional combustion engine. An electric motor also doesn’t require oil lubrication, which means designers can eliminate a large portion of the lubrication system as well. Reina thinks this could lead to more EV manufacturers adding a “frunk,” providing additional storage space in the front of the car. But could cars lose their front ends altogether? Some might. Look at Volkswagen’s prototype bus. Other companies like Toyota are also planning EVs with smaller front ends.

    The bumper itself, and its importance to the car, will also change. With autonomous driving features becoming more common, the front and rear bumpers will be positively lined with sensors. Side-view mirrors will also disappear or shrink considerably, replaced instead with cameras (if the law finally allows them). With LED light technology improving, the big headlights of the past will also morph into small slits or dots, perhaps built into the hood or front bumper.

    On the inside: Roomy and tech-heavy

    The elimination of moving parts will also allow EV manufacturers to increase the size of the interior without the need to increase the vehicle’s overall size. This will mean plentiful legroom for all passengers, as well as a large trunk. With autonomous driving taking over during the next decade, the standard front-facing seating arrangement could very well be no more. Since the car is dr...
    VIEW MORE
  • Smart City Expo World Congress 2019

    2019-08-22
    Fair facts
    Date: 19 – 21 November 2019
    Venue: BARCELONA | Gran Via Venue
    Visitors: 25,000+
    Exhibitors: 1,000+
    Speakers: 400+
    Countries: 140+
    Cities: 700+
    Side events: 65+

    About Smart City Expo World Congress 2019

    CITIES MADE OF DREAMS

    Someone once said that "the future belongs to those who believe in the beauty of their dreams." And we think our dreams are not only seductive and radiant, but possible. We believe that a city is not gauged by its infrastructures, its skin, its digital aesthetics, but by the height of its vision to respond to the needs of all citizens, without leaving anybody behind.

    Smart City Expo World Congress started back in 2011 with a vision, questioning whether smart initiatives could make sustainable cities flourish. Not because it was a nice thing to do but because environmental footprints were quite alarming. The data speaks for itself. The special report on global warming issued by the UN-IPCC in October 2018 warned us that the time to act is now: there are only a dozen years left to limit global warming to 1.5ºC and cut the risks of extreme heat, drought, floods and poverty. We also know that roughly two-thirds of the global population will be living in urban areas by 2050. So, how do we brace for impact?

    We have come a long way since 2011 and the smart city opportunity has become a reality. Today we are happily experiencing this hands-on feeling for the results of the work undertaken. Stakeholders are moving from small proof-of-concept projects to smart implementation at scale. New governance models and new approaches to equity and a circular economy have also emerged along with IoT, artificial intelligence, drones, self-driving cars and new forms of micromobility. New ways of processing and distributing information such as blockchain and IOTA have also come into the picture. The future is not far-off anymore; the future is now. Cities have become socio-economic and political actors on national and world stages and have a major impact on the development of nations. Yet we need to keep on exploring new paths, reinventing places and scenarios, drawing new cartographies of imagination, as we still have the opportunity to make things happen just the way we need them to be.

    NOW, WHAT'S NEXT?

    At Smart City Expo World Congress 2019, we dare to keep on dreaming of a smart urban revolution, since we still need green and liveable cities that reflect a strong sense of responsibility to future generations; cities in which public transport coexists with new mobility options; cities that address both security and privacy concerns; inclusive cities where collaboration becomes a central focus to build a better future; cities that look beyond and are prepared for expected changes and unexpected ones. However, to make things click, we need citizens who dream of a better tomorrow, along with the public and private sectors, civil society, and diverse organizations, as well as academia. We are all implicated.

    OUR 2019 EDITION

    Smart City Expo World Congress 2019 ...
    VIEW MORE
  • 2019 Shanghai International Trade Fair for Automotive Parts, Equipment and Service Suppliers

    2019-08-22
    Facts & figures

    Automechanika Shanghai is part of the 'Automechanika' brand. It welcomes industry players from the entire automotive supply chain to participate and expand their business on a global scale.

    Fair facts
    Date: 3 – 6 December 2019
    Venue: National Exhibition and Convention Center (Shanghai), China
    Exhibition space: 360,000 sqm
    No. of visitors (estimate): 160,000
    No. of exhibitors (estimate): 6,320

    Product groups
    • Parts & components: Components for conventional drive systems; Chassis; Body; Standard mechanical parts; Interior; Exterior; Charging accessories; 12 volt; Regenerated, restored and renewed parts for cars and utility vehicles; External vehicle air quality and exhaust gas treatment; New materials
    • Electronics & connectivity: Engine electronics; Vehicle lighting; Electrical system; Comfort electronics; Human machine interface (HMI); Connectivity; Internet of things
    • Accessories & customising: General accessories for motor vehicles; Technical customising; Visual customising; Infotainment and Car IT; Special vehicles, equipment, assemblies and modifications; Car trailers and small utility vehicle trailers, spare and accessory parts for trailers; Merchandising
    • Diagnostics & maintenance: Workshop equipment for repair and maintenance; Tools; Digital maintenance; Vehicle diagnostics; Maintenance and repair of vehicle superstructures; Towing equipment; Workshop equipment for repair and maintenance for alternative drive concepts; Fastening and bonding solutions; Waste disposal and recycling; Workshop safety and ergonomic workshops; Workshop and dealership equipment; Oils and lubricants; Technical fluids; Workshop concepts
    • Dealer & workshop management: Workshop / dealership / filling station planning and construction; Dealer, sales and service management; Digital marketing; Customer data management; Online presence; E-commerce and mobile payment; Basic and advanced training and professional development; Workshop and dealership marketing; Online service providers and vehicle/parts/service marts; Economic regeneration, research, consulting, cluster initiatives
    • Car wash & care: Washing; Vehicle care; Vehicle preparation and detailing; Water reclamation, water treatment; Filling station equipment
    • Alternative drive systems & fuels: Energy storage; Alternative fuels; Complementary products; Vehicle concepts; Resources; Charging and tank technologies and systems; New workshop technologies
    • REIFEN (Tyres & Wheels): Tyres; Wheels and rims; Tyre/wheel repair and disposal; Used tyres and wheels; Tyre/wheel management and systems; Sales equipment and storage of tyres; Accessories for tyres, wheels and installation
    • Body & paint: Bodywork repairs; Paintwork and corrosion protection; Smart repairs for paintwork, metal parts, plastic parts, windows, headlights, rims; New materials
    • Mobility as a service & autonomous driving: Mobility services; Automated driving; Fleet management / leasing / corporate mobility
    • Others: Industry institutions; Publishers

    Official Website Learn more Visitor Registrat...
    VIEW MORE
  • How has 5G changed our lives?

    2019-08-19
    It’s still too early to experience the real benefits of 5G. Current 5G deployments are limited to just a few neighborhoods in the largest cities, and even there it’s difficult to find a stable 5G signal. None of the truly transformative changes in tech enabled by 5G are possible with such spotty coverage. But you won’t be waiting too long. The major wireless carriers all expect to have a sizeable number of customers on 5G networks by 2025, and the tech industry is already developing next-generation technologies to take advantage of an always-on, super high-speed connection. What will this future look like? We spoke with close to a dozen futurists and technology entrepreneurs to get their predictions on what 5G will look like in the year 2025. From smart cities to smarter homes, to significant advances in artificial intelligence — a lot is about to change.

    SMARTER CARS

    Our cars will become smarter, as they’ll be able to ‘talk’ with one another and with traffic management systems at large. Expect to have fewer ‘cars’ on our city roads by 2025: the adoption of self-driving vehicles, 5G, robot taxis, and a growing gig economy will combine to change how we see cars. The cars that do remain on the road will have more sensors than ever. These sensors won’t just help you park, stay in your lane, or avoid accidents anymore. With 5G, they’ll be interconnected. This opens up a whole new world of possibilities which will all make driving safer, quicker, and less stressful.

    SMARTER CITIES AND SMARTER HOMES

    Traffic congestion is worsening as cities grow. Statistics show that average commute times continue to increase, and will keep increasing as more cars take to the road. There is a significant need for traffic management, according to experts. Robust 5G services may soon enable decidedly futuristic-sounding applications. A.I.-assisted traffic management systems and just-in-time communications will transform the way we move within our cities. Such a system could theoretically make traffic jams a thing of the past. Artificial intelligence would help manage traffic on a regional level: 5G and A.I.-enabled traffic control together could proactively adjust speeds on highways to keep cars moving, or automatically divert traffic around incidents. Cars entering the road could be metered, helping to control traffic flow (see the sketch below). Going further, smart power grids will improve energy efficiency, and improved security systems will keep us safer than ever before. The bandwidth requirements for these applications are far too high for existing network infrastructure, but small cell technologies may soon enable a veritable world of possibilities. Smart homes will also get better. Bandwidth has always been an issue; by addressing the coverage issues that come with Bluetooth, Wi-Fi, and other communications technologies, 5G allows more devices to go online.

    FASTER SPEEDS ON YOUR PHONE

    4G’s fast data speeds jump-started the app revolution. It’s still not fast enough to handle truly data-inten...
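
    Ramp metering, mentioned above, is the most concrete of these mechanisms, so here is a minimal sketch of what such a rule could look like. It is an illustrative toy under invented assumptions (the function name, densities and thresholds are all ours), not any real carrier's or city's control system:

    ```python
    # Hypothetical toy ramp-metering rule: the closer the highway is to its
    # critical density (where flow breaks down), the fewer vehicles per hour
    # the on-ramp releases. All names and thresholds here are invented.

    def ramp_metering_rate(mainline_speed_kmh: float,
                           mainline_density_vpkm: float,
                           critical_density_vpkm: float = 28.0,
                           max_rate_vph: float = 900.0) -> float:
        """Vehicles per hour the on-ramp may release onto the mainline."""
        headroom = max(0.0, 1.0 - mainline_density_vpkm / critical_density_vpkm)
        rate = max_rate_vph * headroom
        if mainline_speed_kmh < 40.0:
            # Traffic has already broken down: hold the ramp at a trickle.
            rate = min(rate, 180.0)
        return rate

    # Sensors report 95 km/h at 18 vehicles/km: release roughly 321 veh/h.
    print(ramp_metering_rate(95.0, 18.0))
    ```

    The article's premise is that 5G changes the inputs rather than the rule: per-vehicle telemetry arrives with low enough latency that speed and density can be measured per road segment in real time instead of being estimated from sparse loop detectors.
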
    VIEW MORE
  • New Smart Memory Controller, Breaking Through the Memory Bandwidth Bottleneck

    2019-08-12
    Microchip's new SMC 1000 8x25G serial memory controller enables CPUs and other compute-centric SoCs to utilize four times the memory channels of parallel-attached DDR4 DRAM within the same package footprint. The SMC 1000 8x25G enables higher memory bandwidth and media independence with ultra-low latency for High Performance Computing (HPC), big data, artificial intelligence and machine learning compute-intensive applications.

    The SMC 1000 8x25G interfaces to the CPU via a narrow 8-lane differential Open Memory Interface (OMI)-compliant 25 Gbps interface and bridges to memory via a wide 72-bit DDR4-3200 interface. The product supports three DDR4 data rates: DDR4-2666, DDR4-2933, and DDR4-3200. The narrow serial interface significantly reduces the required number of host CPU or SoC pins per DDR4 memory channel, which allows for more memory channels and therefore increases the available memory bandwidth (see the arithmetic check below). The SMC 1000 8x25G also features an innovative low-latency design, which gives memory systems using the product virtually identical bandwidth and latency performance to comparable LRDIMM products. The SMC 1000 8x25G combines data and address into one unified chip, whereas an LRDIMM utilizes an RCD buffer and separate data buffers.

    This device is a foundational building block for a wide range of OMI memory applications. These include Differential Dual-Inline Memory Module (DDIMM) applications such as standard-height 1U DDIMMs with capacities from 16 GB to 128 GB and double-height 2U DDIMMs with capacities beyond 256 GB. The SMC 1000 8x25G also supports chip-down applications with off-the-shelf Registered DIMMs (RDIMMs) and NVDIMM-N devices. It integrates an on-chip processor that performs control-path and monitoring functions such as initialization, temperature monitoring, and diagnostics, and it supports manufacturing test operations of attached DRAM memory. Microchip's Trusted Platform support, including a hardware root-of-trust, ensures device and firmware authenticity and supports secure firmware updates.

    Specifications: SMC 1000 8x25G
    • OMI Interface: 1x8, 1x4 support; OIF-28G-MR; up to 25.6 Gbps link rate; dynamic low-power modes
    • DDR4 Memory Interface: x72-bit; DDR4-3200, 2933, or 2666 MT/s memory support; up to 4 ranks; up to 16 Gbit memory devices; 3D stacked memory support
    • Persistent Memory Support: support for NVDIMM-N modules
    • Intelligent Firmware: open-source firmware; on-board processor provides DDR/OMI initialization and in-band temperature and error monitoring; ChipLink GUI
    • Security and Data Protection: hardware root-of-trust, secure boot, and secure update; single-symbol-correction/double-symbol-detection ECC; memory scrub with auto-correction on errors
    • Peripherals Support: SPI, I²C, GPIO, UART and JTAG/EJTAG
    • Small Package and Low Power: power-optimized 17 mm x 17 mm package

    Source: Microsemi
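
    As a sanity check on the figures quoted above, the raw arithmetic works out neatly. The short sketch below uses only the numbers from the spec table and deliberately ignores encoding and protocol overhead, so both results are upper bounds:

    ```python
    # Back-of-envelope bandwidth check using the figures from the article.
    # Raw link rates only: encoding/protocol overhead is deliberately ignored.

    OMI_LANES = 8
    OMI_LANE_RATE_GBPS = 25.6            # per-lane link rate from the spec table
    omi_raw = OMI_LANES * OMI_LANE_RATE_GBPS / 8
    print(f"OMI link, raw: {omi_raw:.1f} GB/s")        # 25.6 GB/s

    DDR4_MTS = 3200                      # mega-transfers per second
    DATA_BITS = 64                       # 72-bit bus = 64 data bits + 8 ECC bits
    ddr4 = DDR4_MTS * DATA_BITS / 8 / 1000
    print(f"DDR4-3200 channel: {ddr4:.1f} GB/s")       # 25.6 GB/s

    # One 8-lane serial link thus matches a full DDR4-3200 channel while using
    # far fewer host pins, which is where the 4x channel-count claim comes from.
    ```
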
    VIEW MORE
  • CEVA and Immervision Enter into Strategic Partnership for Advanced Image Enhancement Technologies

    2019-08-09
    Partnership includes a $10 million technology investment from CEVA, securing exclusive licensing rights to Immervision's patented image processing and sensor fusion software portfolio for wide-angle cameras, which are broadly used in surveillance, smartphone, automotive, robotics and consumer applications.

    MOUNTAIN VIEW, Calif. and MONTREAL, Aug. 6, 2019 /PRNewswire/ -- CEVA, Inc. (NASDAQ: CEVA), the leading licensor of wireless connectivity and smart sensing technologies, today announced that it entered into a strategic partnership agreement with privately owned Immervision, Inc. of Montreal, Canada. A developer and licensor of wide-angle lenses and image processing technologies, Immervision has patented image enhancement algorithms and software technologies that deliver dramatic improvements in image quality and remove the inherent distortions associated with the use of wide-angle cameras, particularly at the edges of the frame. Immervision's technologies have shipped in more than 50 million devices to date through its broad customer base, which includes Acer, Dahua, Garmin, Hanwha, Lenovo, Motorola, Quanta, Sony and Vivotek.

    Under the partnership agreement, CEVA made a $10 million technology investment to secure exclusive licensing rights to Immervision's advanced portfolio of patented wide-angle image processing technology and software. This includes real-time adaptive dewarping (the basic geometry is sketched below), stitching, image color and contrast enhancement, and electronic image stabilization. CEVA will also license Immervision's proprietary Data-in-Picture technology, which integrates fused sensory data, such as that offered by Hillcrest Labs (a business recently acquired by CEVA), within each video frame. This adds contextual information to each frame that enables better image quality, video stabilization and accurate machine vision in AI applications.

    The companies will also collaborate in licensing full end-to-end solutions comprising Immervision's patented wide-angle Panomorph optical lens design and the complementary image enhancement software. Immervision's hardware-agnostic software portfolio will continue to be offered for all System-on-Chip (SoC) platforms containing a GPU (Graphics Processing Unit) and in a power-optimized version for SoCs containing the CEVA-XM4 or CEVA-XM6 intelligent vision DSPs. Along with Immervision's software, CEVA also offers a broad range of other computer vision and AI software technologies, such as the CEVA Deep Neural Network (CDNN) neural network graph compiler, the CEVA-SLAM software development kit, and the CEVA-CV optimized computer vision software library.

    Gideon Wertheizer, CEO of CEVA, commented: "This strategic partnership and technology investment with Immervision provides CEVA with a significant market advantage for the fast-growing wide-angle camera market, particularly in smartphones, surveillance, ADAS and robotics. Through the combination of Immervision's imaging technologies and CEVA's vision and AI software technologies, ...
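
    For readers unfamiliar with dewarping, the sketch below shows the underlying geometry in its most generic form: remapping an equidistant ("f-theta") fisheye image to a rectilinear view. This is textbook projection math, not Immervision's patented algorithms; the function name and parameters are our own:

    ```python
    # Generic fisheye-to-rectilinear dewarp (illustrative only, NOT
    # Immervision's proprietary method). For each output pixel we find the
    # viewing angle it represents and look up the matching fisheye pixel.

    import numpy as np

    def rectilinear_from_fisheye(fisheye, out_size, out_fov_deg, fish_fov_deg):
        h, w = fisheye.shape[:2]
        cx, cy = w / 2.0, h / 2.0
        # Focal length of the virtual rectilinear camera, in pixels.
        f_rect = (out_size / 2.0) / np.tan(np.radians(out_fov_deg) / 2.0)
        # An equidistant fisheye maps angle linearly to radius: r = k * theta.
        k = min(cx, cy) / (np.radians(fish_fov_deg) / 2.0)

        ys, xs = np.mgrid[0:out_size, 0:out_size].astype(np.float64)
        x, y = xs - out_size / 2.0, ys - out_size / 2.0
        theta = np.arctan(np.hypot(x, y) / f_rect)   # angle off the optical axis
        phi = np.arctan2(y, x)                       # azimuth is preserved
        u = np.clip(cx + k * theta * np.cos(phi), 0, w - 1).astype(int)
        v = np.clip(cy + k * theta * np.sin(phi), 0, h - 1).astype(int)
        return fisheye[v, u]                         # nearest-neighbour lookup

    # e.g. view = rectilinear_from_fisheye(img, 512, 90.0, 160.0)
    ```
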
    VIEW MORE
  • NVIDIA @ SIGGRAPH 2019: NV to Enable 30-bit OpenGL Support on GeForce/Titan Cards

    2019-08-05
    Kicking off last week was SIGGRAPH, the annual North American professional graphics gathering that sees everyone from researchers to hardware vendors come together to show off new ideas and new products. Last year’s show ended up being particularly important, as NVIDIA used it as a backdrop for the announcement of their Turing graphics architecture. This year’s NVIDIA presence is far more low-key – NVIDIA doesn’t have any new hardware this time – but the company is still at the show with some announcements.

    Diving right into matters then, this year NVIDIA has an announcement that all professional and prosumer users will want to take note of. At long last, NVIDIA is dropping the requirement to use a Quadro card to get 30-bit (10 bits per channel) color support in OpenGL applications; the company will finally be extending that feature to GeForce and Titan cards as well. Dubbed their Studio Driver: SIGGRAPH Edition, NVIDIA’s latest driver will eliminate the artificial restriction that prevented OpenGL applications from drawing in 30-bit color. For essentially all of the company’s existence, NVIDIA has restricted this feature to their professional visualization Quadro cards in order to create a larger degree of product segmentation between the two product families. With OpenGL (still) widely used for professional content creation applications, this restriction didn’t prevent applications like Photoshop from running on GeForce cards, but it kept true professional users from using them with the full, banding-free precision that the program (and their monitors) were capable of (the effect is demonstrated below). So for the better part of 20 years, it has been one of the most important practical reasons to get a Quadro card over a GeForce card: while it’s possible to use 30-bit color elsewhere (e.g. DirectX), it was held back in a very specific scenario that impacted content creators.

    But with this latest Studio Driver, that’s going away. NVIDIA’s Studio drivers, which can be installed on any Pascal or newer GeForce/Titan card – desktop and mobile – will no longer come with this 30-bit restriction. It will be possible to use 30-bit color anywhere the application supports it, including OpenGL applications.

    To be honest, this wasn’t a restriction I was expecting NVIDIA to lift any time soon. Rival AMD has offered unrestricted 30-bit color support for ages, and it has never caused NVIDIA to flinch. NVIDIA’s official rationale for all of this feels kind of thin – it was a commonly requested feature since the launch of the Studio drivers, so they decided to enable it – but as their official press release notes, working with HDR material pretty much requires 30-bit color; so it’s seemingly no longer a feature NVIDIA can justify restricting to Quadro cards. Still, I suppose one shouldn’t look a gift horse too closely in the mouth. Otherwise, at this point I’m not clear on whether this is going to remain limited to the Studio drivers, or will come to the regular “game ready” GeForce dr...
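
    The banding issue itself is easy to demonstrate: quantize a smooth gradient at 8 and at 10 bits per channel and count the distinct steps. A quick, NVIDIA-independent illustration:

    ```python
    # Why 10 bits per channel matters: quantize a smooth ramp spanning a
    # 4K-wide row at 8bpc and 10bpc and compare the number of distinct levels.

    import numpy as np

    gradient = np.linspace(0.0, 1.0, 3840)        # ideal smooth ramp

    levels_8 = np.round(gradient * 255) / 255     # 256 representable values
    levels_10 = np.round(gradient * 1023) / 1023  # 1024 representable values

    print("distinct steps at 8bpc: ", len(np.unique(levels_8)))    # 256
    print("distinct steps at 10bpc:", len(np.unique(levels_10)))   # 1024

    # At 8bpc each level spans ~15 pixels of the 3840-wide gradient: visible
    # bands. At 10bpc the steps shrink to ~4 pixels and effectively vanish,
    # which is why HDR and color-critical OpenGL work want 10bpc end to end.
    ```
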
    VIEW MORE
  • Toposens Launches TS3 Ultrasonic Sensor

    2019-08-02
    Toposens recently announced the release of its flagship product TS3, a 3D ultrasonic sensor suitable for a wide range of applications in the autonomous systems market with a strong need for reliable object detection and situational awareness. In comparison to common ultrasonic sensors, which usually measure the distance only to the closest reflecting surface, Toposens’ new 3D sensors achieve a wide field of view of up to 160° and provide simultaneous 3D measurements for multiple objects within the scanning area. The operation thus mimics the echolocation techniques used by bats and dolphins for navigation and orientation in the wild.

    The new TS3 sensor combines carefully selected hardware components with proprietary signal processing algorithms. It is ideally suited for indoor robotic navigation and object avoidance. Systems benefit from its real-time processing capabilities while keeping data transmission bandwidth and power consumption low, which is especially important for battery-powered robots. Exemplary use cases include home cleaning robots and delivery/service robots. The TS3 sensor enables them to reliably map an environment with minimal processing power and to localize themselves in predefined maps to execute complex path planning algorithms. TS3 sensors perform independently of ambient light conditions and are even capable of detecting mirroring and transparent surfaces, adding an additional layer of safety where optical sensors tend to fail. For even higher reliability, the generated 3D point cloud can easily be fused with data from other system-relevant sensors.

    The new TS3 sensor is an embedded sensor system that sends out ultrasound waves in a frequency range inaudible to humans. An array of microphones subsequently records the echoes from all objects in the sensor’s vicinity and computes their locations in 3-dimensional space (the principle is sketched below). It thereby creates an entirely new way of ultrasonic sensing for autonomous systems.

    “Because our new ‘Bat Vision’ TS3 sensor is compact, affordable and integration-ready, engineers can easily add it to their sensor stacks to replace or complement their existing optical sensing systems, providing both redundancy and an improved level of accuracy compared to standard ultrasonic sensors in various autonomous navigation applications,” explains Tobias Bahnemann, Managing Director of Toposens.

    The core technology is based on Toposens’ SoundVision1™ chip, making the sensor system easily adaptable to a variety of product designs. This qualifies the TS3 as the perfect technology platform for developing next-level mass-market vehicles in robotic and even automotive use cases like automated parking and next-level ADAS functionality. Technical specifications include a detection range of up to 5 meters and a scan rate of approximately 28 Hz. The TS3 returns up to 200 points per second, with each 3D point corresponding to the Cartesian coordinates and an additional volume measurement of the ultr...
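
    The echolocation principle lends itself to a compact sketch: a pulse leaves the emitter, bounces off an object, and arrives at each microphone after a path whose length pins down the object's position. The array layout, timings and solver below are invented for illustration; Toposens' actual signal processing is proprietary:

    ```python
    # Generic 3D echolocation by time-of-flight multilateration (illustration
    # only; array geometry and numbers are invented). Emitter at the origin,
    # microphones at known offsets; each echo delay gives one path length
    # emitter -> object -> microphone, and several of them fix the 3D point.

    import numpy as np
    from scipy.optimize import least_squares

    SPEED_OF_SOUND = 343.0                       # m/s in air at ~20 degC
    MICS = np.array([[ 0.02,  0.00, 0.0],        # hypothetical 4-mic layout (m)
                     [-0.02,  0.00, 0.0],
                     [ 0.00,  0.02, 0.0],
                     [ 0.00, -0.02, 0.0]])

    def path_lengths(obj):
        return np.linalg.norm(obj) + np.linalg.norm(MICS - obj, axis=1)

    def locate(echo_delays_s):
        measured = echo_delays_s * SPEED_OF_SOUND
        # Start the solver in front of the array; a forward-facing sensor
        # resolves the front/back mirror ambiguity of a planar mic array.
        return least_squares(lambda p: path_lengths(p) - measured,
                             x0=np.array([0.0, 0.0, 1.0])).x

    true_obj = np.array([0.3, -0.1, 1.5])        # simulate an echo, then recover
    delays = path_lengths(true_obj) / SPEED_OF_SOUND
    print(locate(delays))                        # ~ [0.3, -0.1, 1.5]
    ```
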
    VIEW MORE
  • Aspinity smart-sensing edge architecture tackles power- and data-efficiency problems

    2019-07-29
    Aspinity, a semiconductor startup funded by the Alexa Fund and others and based in Pittsburgh, USA, recently announced the first smart-sensing edge architecture to tackle the power- and data-efficiency problems in the billions of battery-powered consumer electronics, smart home systems, and predictive-maintenance devices on which we increasingly rely. Aspinity announced its reconfigurable analog modular processor (RAMP) platform, an ultra-low-power analog processing platform that overcomes the power and data handling challenges in battery-operated, always-on sensing devices. Incorporating machine learning into an analog neuromorphic processor, Aspinity’s RAMP platform enables 10x power savings over older architectures. Devices can now run for months or years, instead of days or weeks, without battery recharge or replacement.

    Smart-sensing edge architecture

    Elaborating on Aspinity’s smart-sensing edge architecture, Tom Doyle, CEO and founder, said that Aspinity offers a fundamentally new architectural approach to conserving power and data resources in always-on devices. The scalable and programmable RAMP technology incorporates powerful machine learning into an ultra-low-power analog neuromorphic processor that can detect unique events from background noise before the data is digitized. By directly analyzing the raw analog sensor data for what’s important, the RAMP chip eliminates the higher-power processing of irrelevant data. System designers can now stop sacrificing features and accuracy for longer battery life. Aspinity’s analyze-first approach reduces the power consumption of always-sensing systems by up to 10x and data requirements by up to 100x.

    The RAMP chip’s analog blocks can be reprogrammed with application-specific algorithms for detection of different events and different types of sensor input. For example, designers can use a RAMP chip for always-listening applications, where the chip conserves system power by keeping the rest of the always-listening system in a low-power sleep state until a specific sound, such as a voice or an alarm, has been detected. Unlike other sensor edge solutions for voice activity detection, the RAMP chip also supports voice-first devices by storing the pre-roll data required by wake word engines. For industrial applications, designers can use a RAMP chip to sample and select only the most important data points from thousands of points of sensor data: compressing vibration data into a reduced number of frequency/energy pairs and dramatically decreasing the amount of data collected and transmitted for analysis (a toy illustration of this reduction follows below). This is the USP for the RAMP platform. With so many ways to program a RAMP core, as well as broad algorithm support for different types of analysis and output, the RAMP chip uniquely enables a whole new generation of smaller, lower-cost, more power- and data-efficient, battery-operated, always-on devices for consumer, IoT, industrial and biomedical applications.

    Much longer battery life

    Short batter...
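
    The vibration example above is easy to make concrete. Below is a toy version of the "analyze first, transmit less" reduction; note that the RAMP chip performs this kind of reduction in the analog domain, whereas this digital FFT stand-in (with a simulated signal) only illustrates the data savings:

    ```python
    # Toy version of compressing vibration data into frequency/energy pairs.
    # The RAMP chip does this in analog; this digital FFT stand-in only
    # illustrates the scale of the reduction. Signal is simulated.

    import numpy as np

    FS = 8000                                  # sample rate (Hz), one second
    t = np.arange(FS) / FS
    # Simulated machine vibration: two bearing tones plus broadband noise.
    signal = (0.8 * np.sin(2 * np.pi * 120 * t)
              + 0.3 * np.sin(2 * np.pi * 390 * t)
              + 0.05 * np.random.randn(FS))

    energy = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(FS, d=1.0 / FS)

    TOP_K = 2                                  # keep only the strongest peaks
    peaks = np.argsort(energy)[-TOP_K:]
    pairs = sorted(zip(freqs[peaks], energy[peaks]))
    print(pairs)                               # ~[(120.0, ...), (390.0, ...)]

    # Two frequency/energy pairs stand in for 8,000 raw samples: the kind of
    # ~100x reduction in transmitted data the article describes.
    ```
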
    VIEW MORE
  • OptimalPlus launches Lifecycle Analytics Solution for ADAS

    2019-07-26
    OptimalPlus, a specialist in lifecycle analytics solutions, has launched a new Lifecycle Analytics Solution for Advanced Driver Assistance System (ADAS) cameras. The offering provides manufacturers of ADAS with real-time data insights based on big data and machine learning to optimise production and increase product quality, according to the company.

    ADAS cameras are electro-optical systems that aid vehicle drivers, are intended to increase car and road safety, and are a cornerstone of the technologies being developed to support autonomous and semi-autonomous vehicles. The manufacturing of ADAS cameras is a highly complicated and costly process, involving the integration of multiple specialised and sensitive sensors in a series of irreversible processes to create a high-performance system. This has made it difficult for manufacturers to detect defective products during the assembly process, resulting in unpredictable performance that remains undetermined until the camera system is fully assembled and tested. As a result, manufacturers of new ADAS camera designs struggle with extremely high scrap rates of around 25% on already expensive systems, leading to a high cost per unit and impeding the rate at which automakers can integrate the systems into their cars.

    “Automakers need a holistic solution that can provide the big picture about the health of a vehicle,” said Dan Glotter, CEO of OptimalPlus. “We are enabling automakers to take full advantage of the potential offered by new technologies while removing concerns about manufacturing quality products cost-efficiently. The next two waves facing the automotive industry are autonomy and electrification, and both are going to bring enormous manufacturing complexities, requiring new analytics methods for faster product ramp, reduced scrap rates, and improved quality & reliability.”

    Assembling ADAS cameras relies on a complicated supply chain that provides electronic and optical components from different geographical locations, all with different methods of ensuring and monitoring reliability. With manufacturers relying on separate silos of product data and information, it is exceedingly difficult to ensure that these components will perform up to the required standards. OptimalPlus looks to address these issues by providing much greater visibility throughout the supply chain and connecting supplier data to field performance. This enables a full overview of production, increases efficiency, and enables preemptive actions to find problematic products earlier in the manufacturing cycle (a simplified sketch of such a gate follows below), preventing unreliable products from being deployed or removing them in real time from the factory floor, reducing scrap rates and avoiding costly recalls.

    “As systems such as ADAS cameras, which are the backbone of autonomous vehicles, become critical for safe driving, guaranteeing system quality is only going to become more important. As such, OEMs are going to demand accountability from their technology providers on all suppl...
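
    As a concrete, entirely hypothetical illustration of catching problems before an irreversible assembly step, consider holding any unit whose station measurement drifts outside statistical control limits derived from recent good units. The data, names and limits below are invented; OptimalPlus's actual analytics are far richer than this:

    ```python
    # Hypothetical early-defect gate: hold camera modules whose focus-station
    # score falls outside mean +/- 3 sigma of recent in-spec history, before
    # they continue into irreversible assembly. All data here is invented.

    import statistics

    def control_limits(history, k=3.0):
        """Classic mean +/- k-sigma limits from recent in-spec history."""
        mu = statistics.fmean(history)
        sigma = statistics.stdev(history)
        return mu - k * sigma, mu + k * sigma

    history = [50.1, 49.8, 50.0, 50.3, 49.9, 50.2, 50.0, 49.7]   # focus scores
    lo, hi = control_limits(history)                             # ~49.4 .. 50.6

    incoming = {"unit-1041": 50.1, "unit-1042": 47.2, "unit-1043": 49.9}
    for unit, score in incoming.items():
        verdict = "pass" if lo <= score <= hi else "HOLD for inspection"
        print(unit, score, verdict)            # unit-1042 is held pre-assembly
    ```
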
    VIEW MORE