• 2019 Shanghai International Trade Fair for Automotive Parts, Equipment and Service Suppliers

    Facts & figures: Automechanika Shanghai is part of the 'Automechanika' brand. It welcomes industry players from the entire automotive supply chain to participate and expand their business on a global scale.

    Fair facts
    Date: 3 – 6 December 2019
    Venue: National Exhibition and Convention Center (Shanghai), China
    Exhibition space: 360,000 sqm
    No. of visitors (estimate): 160,000
    No. of exhibitors (estimate): 6,320

    Product groups
    Parts & components: Components for conventional drive systems; Chassis; Body; Standard mechanical parts; Interior; Exterior; Charging accessories; 12 volt; Regenerated, restored and renewed parts for cars and utility vehicles; External vehicle air quality and exhaust gas treatment; New materials
    Electronics & connectivity: Engine electronics; Vehicle lighting; Electrical system; Comfort electronics; Human machine interface (HMI); Connectivity; Internet of things
    Accessories & customising: General accessories for motor vehicles; Technical customising; Visual customising; Infotainment and Car IT; Special vehicles, equipment, assemblies and modifications; Car trailers and small utility vehicle trailers, spare and accessory parts for trailers; Merchandising
    Diagnostics & maintenance: Workshop equipment for repair and maintenance; Tools; Digital maintenance; Vehicle diagnostics; Maintenance and repair of vehicle superstructures; Towing equipment; Workshop equipment for repair and maintenance for alternative drive concepts; Fastening and bonding solutions; Waste disposal and recycling; Workshop safety and ergonomic workshops
    Workshop and dealership equipment: Oils and lubricants; Technical fluids; Workshop concepts
    Dealer & workshop management: Workshop / dealership / filling station planning and construction; Dealer, sales and service management; Digital marketing; Customer data management; Online presence; E-commerce and mobile payment; Basic and advanced training and professional development; Workshop and dealership marketing; Online service providers and vehicle/parts/service marts; Economic regeneration, research, consulting, cluster initiatives
    Car wash & care: Washing; Vehicle care; Vehicle preparation and detailing; Water reclamation, water treatment; Filling station equipment
    Alternative drive systems & fuels: Energy storage; Alternative fuels; Complementary products; Vehicle concepts; Resources; Charging and tank technologies and systems; New workshop technologies
    REIFEN (Tyres & Wheels): Tyres; Wheels and rims; Tyre/wheel repair and disposal; Used tyres and wheels; Tyre/wheel management and systems; Sales equipment and storage of tyres; Accessories for tyres, wheels and installation
    Body & paint: Bodywork repairs; Paintwork and corrosion protection; Smart repairs for paintwork, metal parts, plastic parts, windows, headlights, rims; New materials
    Mobility as a service & autonomous driving: Mobility services; Automated driving; Fleet management / leasing / corporate mobility
    Others: Industry institutions; Publishers
  • How has 5G changed our lives?

    It’s still too early to experience the real benefits of 5G. Current 5G deployments are limited to just a few neighborhoods in the largest cities, and even there it’s difficult to find a stable 5G signal. None of the truly transformative changes that 5G promises are possible with such spotty coverage. But you won’t be waiting too long. The major wireless carriers all expect to have a sizeable number of customers on 5G networks by 2025, and the tech industry is already developing next-generation technologies to take advantage of an always-on, super high-speed connection. What will this future look like? We spoke with close to a dozen futurists and technology entrepreneurs to get their predictions on what 5G will look like in the year 2025. From smart cities to smarter homes to significant advances in artificial intelligence, a lot is about to change.

    SMARTER CARS

    Our cars will become smarter, as they’ll be able to ‘talk’ with one another and with traffic management systems at large. Expect to see fewer cars on our city roads by 2025: the adoption of self-driving vehicles, 5G, robot taxis, and a growing gig economy will combine to change how we see cars. The cars that do remain on the road will have more sensors than ever. These sensors won’t just help you park, stay in your lane, or avoid accidents anymore. With 5G, they’ll be interconnected. This opens up a whole new world of possibilities, all of which will make driving safer, quicker, and less stressful.

    SMARTER CITIES AND SMARTER HOMES

    Traffic congestion is worsening as cities grow. Statistics show that average commute times continue to increase, and will keep increasing as more cars take to the road. According to experts, there is a significant need for traffic management. Robust 5G services may soon enable decidedly futuristic-sounding applications: A.I.-assisted traffic management systems and just-in-time communications will transform the way we move within our cities.
    Such a system could theoretically make traffic jams a thing of the past. Artificial intelligence would help manage traffic on a regional level: 5G and A.I.-enabled traffic control together could proactively adjust speeds on highways to keep cars moving, or automatically divert traffic around incidents. Cars entering the road could be metered, helping to control traffic flow. Going further, smart power grids will improve energy efficiency, and improved security systems will keep us safer than ever before. The bandwidth requirements for these applications are far too high for existing network infrastructure, but small cell technologies may soon enable a veritable world of possibilities. Smart homes will also get better. Bandwidth has always been an issue; by addressing the coverage issues that occur with Bluetooth, Wi-Fi, and other communications technologies, 5G allows more devices to go online.

    FASTER SPEEDS ON YOUR PHONE

    4G’s fast data speeds jump-started the app revolution. It’s still not fast enough to handle truly data-inten...
  • New Smart Memory Controller, Breaking Through the Memory Bandwidth Bottleneck

    Microchip's new SMC 1000 8x25G serial memory controller enables CPUs and other compute-centric SoCs to utilize four times the memory channels within the same package footprint compared with parallel-attached DDR4 DRAM. The SMC 1000 8x25G enables higher memory bandwidth and media independence with ultra-low latency for High Performance Computing (HPC), big data, artificial intelligence and machine learning compute-intensive applications. The SMC 1000 8x25G interfaces to the CPU via a narrow 8-lane differential Open Memory Interface (OMI)-compliant 25 Gbps interface and bridges to memory via a wide 72-bit DDR4-3200 interface. The product supports three DDR4 data rates (DDR4-2666, DDR4-2933, and DDR4-3200) and significantly reduces the required number of host CPU or SoC pins per DDR4 memory channel, which allows for more memory channels and therefore increases the available memory bandwidth. The SMC 1000 8x25G also features an innovative low-latency design that gives memory systems using the product virtually identical bandwidth and latency performance to comparable LRDIMM products. The SMC 1000 8x25G combines the data and address paths into one unified chip, whereas LRDIMM utilizes an RCD buffer and separate data buffers. This device is a foundational building block for a wide range of OMI memory applications. These include Differential Dual-Inline Memory Module (DDIMM) applications such as standard-height 1U DDIMMs with capacities from 16 GB to 128 GB and double-height 2U DDIMMs with capacities beyond 256 GB. The SMC 1000 8x25G also supports chip-down applications with off-the-shelf Registered DIMMs (RDIMMs) and NVDIMM-N devices. The SMC 1000 8x25G integrates an on-chip processor that performs control-path and monitoring functions such as initialization, temperature monitoring, and diagnostics. The device supports manufacturing test operations of attached DRAM memory.
    Microchip’s Trusted Platform support, including a hardware root-of-trust, ensures device and firmware authenticity and supports secure firmware updates.

    Specifications: SMC 1000 8x25G
    OMI Interface: 1x8 and 1x4 support; OIF-28G-MR; up to 25.6 Gbps link rate; dynamic low-power modes
    DDR4 Memory Interface: x72-bit DDR4-3200, 2933, or 2666 MT/s memory support; up to 4 ranks; up to 16 Gbit memory devices; 3D stacked memory support
    Persistent Memory Support: support for NVDIMM-N modules
    Intelligent Firmware: open-source firmware; on-board processor provides DDR/OMI initialization and in-band temperature and error monitoring; ChipLink GUI
    Security and Data Protection: hardware root-of-trust, secure boot, and secure update; single-symbol-correction/double-symbol-detection ECC; memory scrub with auto-correction on errors
    Peripherals Support: SPI, I²C, GPIO, UART and JTAG/EJTAG
    Small Package and Low Power: power-optimized 17 mm x 17 mm package

    Source: Microsemi
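The pin-efficiency argument behind OMI can be checked with back-of-the-envelope arithmetic. The sketch below uses the 25.6 Gbps link rate and DDR4-3200 speed quoted above; the function names and the simplification of ignoring protocol and ECC overhead are my own assumptions:

```python
# Rough peak-bandwidth comparison: an 8-lane OMI serial link vs. one
# DDR4-3200 channel. Encoding overhead and ECC bits are ignored for simplicity.

def omi_bandwidth_gbs(lanes: int = 8, gbps_per_lane: float = 25.6) -> float:
    """Aggregate raw OMI link bandwidth in GB/s."""
    return lanes * gbps_per_lane / 8  # bits -> bytes

def ddr4_bandwidth_gbs(mt_per_s: int = 3200, data_bits: int = 64) -> float:
    """Peak DDR4 channel bandwidth in GB/s (64 data bits, ECC excluded)."""
    return mt_per_s * (data_bits // 8) / 1000  # MB/s -> GB/s

# Both come out to ~25.6 GB/s, but the narrow serial link needs far fewer
# signal pins than a 288-pin DDR4 DIMM channel, which is what frees up
# package pins for additional memory channels.
print(omi_bandwidth_gbs(), ddr4_bandwidth_gbs())
```

The two peak rates match, which is why the article can claim like-for-like bandwidth per channel while multiplying the channel count.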
  • CEVA and Immervision Enter into Strategic Partnership for Advanced Image Enhancement Technologies

    Partnership includes $10 million technology investment from CEVA, securing exclusive licensing rights to Immervision's patented image processing and sensor fusion software portfolio for wide-angle cameras, which are broadly used in surveillance, smartphone, automotive, robotics and consumer applications

    MOUNTAIN VIEW, Calif. and MONTREAL, Aug. 6, 2019 /PRNewswire/ -- CEVA, Inc. (NASDAQ: CEVA), the leading licensor of wireless connectivity and smart sensing technologies, today announced that it entered into a strategic partnership agreement with privately owned Immervision, Inc. of Montreal, Canada. A developer and licensor of wide-angle lenses and image processing technologies, Immervision has patented image enhancement algorithms and software technologies that deliver dramatic improvements in image quality and remove the inherent distortions associated with the use of wide-angle cameras, particularly at the edges of the frame. Immervision's technologies have shipped in more than 50 million devices to date through its broad customer base, which includes Acer, Dahua, Garmin, Hanwha, Lenovo, Motorola, Quanta, Sony and Vivotek. Under the partnership agreement, CEVA made a $10 million technology investment to secure exclusive licensing rights to Immervision's advanced portfolio of patented wide-angle image processing technology and software. This includes real-time adaptive dewarping, stitching, image color and contrast enhancement, and electronic image stabilization. CEVA will also license Immervision's proprietary Data-in-Picture technology, which embeds fused sensory data, such as that offered by Hillcrest Labs (a business recently acquired by CEVA), within each video frame. This adds contextual information to each frame that enables better image quality, video stabilization and accurate machine vision in AI applications.
    The companies will also collaborate in licensing full end-to-end solutions comprising Immervision's patented wide-angle Panomorph optical lens design and the complementary image enhancement software. Immervision's hardware-agnostic software portfolio will continue to be offered for all System-on-Chip (SoC) platforms containing a GPU (Graphics Processing Unit), and in a power-optimized version for SoCs containing the CEVA-XM4 or CEVA-XM6 intelligent vision DSPs. Along with Immervision's software, CEVA also offers a broad range of other computer vision and AI software technologies, such as the CEVA Deep Neural Network (CDNN) neural network graph compiler, the CEVA-SLAM software development kit, and the CEVA-CV optimized computer vision software library. Gideon Wertheizer, CEO of CEVA, commented: "This strategic partnership and technology investment with Immervision provides CEVA with a significant market advantage in the fast-growing wide-angle camera market, particularly in smartphones, surveillance, ADAS and robotics. Through the combination of Immervision's imaging technologies and CEVA's vision and AI software technologies, ...
  • NVIDIA @ SIGGRAPH 2019: NV to Enable 30-bit OpenGL Support on GeForce/Titan Cards

    Kicking off last week was SIGGRAPH, the annual North American professional graphics gathering that sees everyone from researchers to hardware vendors come together to show off new ideas and new products. Last year’s show ended up being particularly important, as NVIDIA used it as the backdrop for the announcement of their Turing graphics architecture. This year’s NVIDIA presence is far more low-key – NVIDIA doesn’t have any new hardware this time – but the company is still at the show with some announcements. Diving right into matters then, this year NVIDIA has an announcement that all professional and prosumer users will want to take note of. At long last, NVIDIA is dropping the requirement to use a Quadro card to get 30-bit (10bpc) color support in OpenGL applications; the company will finally be extending that feature to GeForce and Titan cards as well. Dubbed the Studio Driver: SIGGRAPH Edition, NVIDIA’s latest driver eliminates the artificial restriction that prevented OpenGL applications from drawing in 30-bit color. For essentially all of the company’s existence, NVIDIA has restricted this feature to their professional visualization Quadro cards in order to create a larger degree of product segmentation between the two product families. With OpenGL (still) widely used for professional content creation applications, this restriction didn’t prevent applications like Photoshop from running on GeForce cards, but it kept true professional users from working with the full, banding-free precision that the program (and their monitors) were capable of. So for the better part of 20 years, it has been one of the most important practical reasons to get a Quadro card over a GeForce card: while it’s possible to use 30-bit color elsewhere (e.g. DirectX), it was held back in a very specific scenario that impacted content creators. But with this latest Studio Driver, that’s going away.
NVIDIA’s Studio drivers, which can be installed on any Pascal or newer GeForce/Titan card – desktop and mobile – will no longer come with this 30-bit restriction. It will be possible to use 30-bit color anywhere that the application supports it, including OpenGL applications. To be honest, this wasn’t a restriction I was expecting NVIDIA to lift any time soon. Rival AMD has offered unrestricted 30-bit color support for ages, and it has never caused NVIDIA to flinch. NVIDIA’s official rationale for all of this feels kind of thin – it was a commonly requested feature since the launch of the Studio drivers, so they decided to enable it – but as their official press release notes, working with HDR material pretty much requires 30-bit color; so it’s seemingly no longer a feature NVIDIA can justify restricting from Quadro cards. Still, I suppose one shouldn’t look a gift horse too closely in the mouth. Otherwise, at this point I’m not clear on whether this is going to remain limited to the Studio drivers, or will come to the regular “game ready” GeForce dr...
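The practical stakes of "30-bit color" come down to per-channel quantization. A minimal illustration (the function name is my own):

```python
def gradient_steps(bits_per_channel: int) -> int:
    """Number of distinct intensity levels per color channel at a given depth."""
    return 2 ** bits_per_channel

# 24-bit color (8 bits per channel) vs. 30-bit color (10 bits per channel):
print(gradient_steps(8))   # 256 levels per channel; smooth gradients can band
print(gradient_steps(10))  # 1024 levels per channel; 4x finer steps
```

Four times as many steps per channel is what eliminates the visible banding on smooth gradients that professional and HDR workflows care about.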
  • Toposens Launches TS3 Ultrasonic Sensor

    Toposens recently announced the release of its current flagship product, the TS3, a 3D ultrasonic sensor suitable for a wide range of applications in the autonomous systems market that have a strong need for reliable object detection and situational awareness. In comparison to common ultrasonic sensors, which usually measure the distance only to the closest reflecting surface, Toposens’ new 3D sensors achieve a wide field of view of up to 160° and provide simultaneous 3D measurements for multiple objects within the scanning area. The operation thus mimics the echolocation techniques used by bats and dolphins for navigation and orientation in the wild. The new TS3 sensor combines carefully selected hardware components with proprietary signal processing algorithms. It is ideally suited for indoor robotic navigation and object avoidance. Systems benefit from its real-time processing capabilities while keeping data transmission bandwidth and power consumption low, which is especially important for battery-powered robots. Exemplary use cases include home cleaning robots and delivery/service robots. The TS3 sensor enables them to reliably map an environment with minimal processing power and to localize themselves in predefined maps to execute complex path-planning algorithms. TS3 sensors perform independently of ambient light conditions and are even capable of detecting mirroring and transparent surfaces, adding an additional layer of safety where optical sensors tend to fail. For even higher reliability, the generated 3D point cloud can easily be fused with data from other system-relevant sensors. The new TS3 sensor is an embedded sensor system that sends out ultrasound waves in a frequency range inaudible to humans. An array of microphones subsequently records the echoes from all objects in the sensor’s vicinity and computes their locations in three-dimensional space. It thereby creates an entirely new way of ultrasonic sensing for autonomous systems.
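The echolocation principle described above reduces to time-of-flight arithmetic. A minimal sketch; the speed of sound and the example timing are generic physics, not Toposens specifications:

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at roughly 20 °C

def echo_distance_m(round_trip_s: float) -> float:
    """Distance to a reflecting surface, given the round-trip echo time."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2  # sound travels out and back

# An echo returning after ~29 ms corresponds to a target near the TS3's
# quoted 5 m maximum range:
print(echo_distance_m(0.029))
```

Resolving full 3D positions, as the TS3 does, additionally uses the arrival-time differences across the microphone array rather than a single round-trip time.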
    “Our new ‘Bat Vision’ TS3 sensor is compact, affordable and integration-ready,” explains Tobias Bahnemann, Managing Director of Toposens. “Engineers can easily add it to their sensor stacks to replace or complement their existing optical sensing systems, providing both redundancy and an improved level of accuracy compared to standard ultrasonic sensors in various autonomous navigation applications.” The core technology is based on Toposens’ SoundVision1™ chip, making the sensor system easily adaptable to a variety of product designs. This qualifies the TS3 as the perfect technology platform for developing next-level mass-market vehicles in robotic and even automotive use cases such as automated parking and next-level ADAS functionality. Technical specifications include a detection range of up to 5 meters and a scan rate of approximately 28 Hz. The TS3 returns up to 200 points per second, with each 3D point corresponding to the Cartesian coordinates and an additional volume measurement of the ultr...
  • Aspinity smart-sensing edge architecture tackles power- and data-efficiency problems

    Aspinity, a semiconductor startup based in Pittsburgh, USA and funded by the Alexa Fund and others, recently announced the first smart-sensing edge architecture to tackle the power- and data-efficiency problems in the billions of battery-powered consumer electronics, smart home systems, and predictive-maintenance devices on which we increasingly rely. Aspinity announced its reconfigurable analog modular processor (RAMP) platform, an ultra-low-power analog processing platform that overcomes the power and data handling challenges in battery-operated, always-on sensing devices. Incorporating machine learning into an analog neuromorphic processor, Aspinity’s RAMP platform enables 10x power savings over older architectures. Devices can now run for months or years, instead of days or weeks, without battery recharge or replacement.

    Smart-sensing edge architecture

    Elaborating on Aspinity’s smart-sensing edge architecture, Tom Doyle, CEO and founder, said that Aspinity offers a fundamentally new architectural approach to conserving power and data resources in always-on devices. The scalable and programmable RAMP technology incorporates powerful machine learning into an ultra-low-power analog neuromorphic processor that can detect unique events against background noise before the data is digitized. By directly analyzing raw analog sensor data for what’s important, the RAMP chip eliminates the higher-power processing of irrelevant data. System designers can now stop sacrificing features and accuracy for longer battery life. Aspinity’s analyze-first approach reduces the power consumption of always-sensing systems by up to 10x and data requirements by up to 100x. The RAMP chip’s analog blocks can be reprogrammed with application-specific algorithms for detection of different events and different types of sensor input.
    For example, designers can use a RAMP chip for always-listening applications, where the chip conserves system power by keeping the rest of the always-listening system in a low-power sleep state until a specific sound, such as a voice or an alarm, has been detected. Unlike other edge sensor solutions for voice activity detection, the RAMP chip also supports voice-first devices by storing the pre-roll data required by wake-word engines. For industrial applications, designers can use a RAMP chip to sample and select only the most important data points from thousands of points of sensor data, compressing vibration data into a reduced number of frequency/energy pairs and dramatically decreasing the amount of data collected and transmitted for analysis. This is the USP of the RAMP platform. With so many ways to program a RAMP core, as well as broad algorithm support for different types of analysis and output, the RAMP chip uniquely enables a whole new generation of smaller, lower-cost, more power- and data-efficient, battery-operated, always-on devices for consumer, IoT, industrial and biomedical applications.

    Much longer battery life

    Short batter...
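The vibration-compression idea mentioned for industrial applications, reducing a raw trace to a handful of frequency/energy pairs, can be sketched digitally with an FFT and a top-k selection. This is a hedged illustration of the general approach, not Aspinity's algorithm, which performs the analysis in analog circuitry before digitization:

```python
import numpy as np

def compress_vibration(samples, sample_rate_hz, k=4):
    """Reduce a vibration trace to its k strongest (frequency, energy) pairs."""
    energy = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    top = np.argsort(energy)[-k:][::-1]  # indices of the k largest bins
    return [(float(freqs[i]), float(energy[i])) for i in top]

# A 120 Hz tone (e.g. a bearing-fault signature) buried in noise collapses
# from 4096 raw samples to just k pairs:
rng = np.random.default_rng(0)
t = np.arange(0, 1, 1 / 4096)
trace = np.sin(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(t.size)
pairs = compress_vibration(trace, 4096)
print(pairs[0][0])  # strongest component sits at 120.0 Hz
```

Transmitting four pairs instead of 4096 samples is the roughly 100x data reduction the article describes.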
  • OptimalPlus launches Lifecycle Analytics Solution for ADAS

    OptimalPlus, a specialist in lifecycle analytics solutions, has launched a new Lifecycle Analytics Solution for Advanced Driver Assistance System (ADAS) cameras. The offering provides manufacturers of ADAS with real-time data insights based on big data and machine learning to optimise production and increase product quality, according to the company. ADAS cameras are electro-optical systems that aid vehicle drivers and are intended to increase car and road safety; they are a cornerstone of the technologies being developed to support autonomous and semi-autonomous vehicles. The manufacturing of ADAS cameras is a highly complicated and costly process, involving the integration of multiple specialised and sensitive sensors in a series of irreversible processes to create a high-performance system. This has made it difficult for manufacturers to detect defective products during the assembly process, resulting in unpredictable performance that remains undetermined until the camera system is fully assembled and tested. As a result, manufacturers of new ADAS camera designs struggle with extremely high scrap rates of around 25% on already expensive systems, leading to a high cost per unit and impeding the rate at which automakers can integrate the systems into their cars. “Automakers need a holistic solution that can provide the big picture about the health of a vehicle,” said Dan Glotter, CEO of OptimalPlus. “We are enabling automakers to take full advantage of the potential offered by new technologies while removing concerns about manufacturing quality products cost-efficiently.
    The next two waves facing the automotive industry are autonomy and electrification, and both are going to bring enormous manufacturing complexities, requiring new analytics methods for faster product ramp, reduced scrap rates, and improved quality & reliability.” Assembling ADAS cameras relies on a complicated supply chain that provides electronic and optical components from different geographical locations, each with different methods of ensuring and monitoring reliability. With manufacturers relying on separate silos of product data and information, it is exceedingly difficult to ensure that these components will perform up to the required standards. OptimalPlus looks to address these issues by providing much greater visibility throughout the supply chain: connecting supplier data to field performance, enabling a full overview of production, and increasing efficiency. This enables preemptive action to find problematic products earlier in the manufacturing cycle, preventing unreliable products from being deployed or removing them in real time from the factory floor, reducing scrap rates and avoiding costly recalls. “As systems such as ADAS cameras, which are the backbone of autonomous vehicles, become critical for safe driving, guaranteeing system quality is only going to become more important. As such, OEMs are going to demand accountability from their technology providers on all suppl...
  • Comment: Bluetooth as the smart building protocol of choice

    With more than 374 million Bluetooth smart building devices expected to ship annually by 2023, there’s plenty of opportunity for developers and engineers to join the Bluetooth mesh networking revolution. What exactly is making everyone go mad for mesh? The new mesh capability enables many-to-many device communications and is optimised for creating large-scale device networks. Since its release in July 2017, Bluetooth mesh has become the clear choice for large-scale device networks. Already, more than 200 products with mesh networking capability have been qualified from leading silicon, stack, component, and end-product vendors. At the Bluetooth SIG we are seeing a tremendous amount of momentum behind the mesh networking capabilities of Bluetooth, including large players in the smart home market, like Alibaba and Xiaomi, making strategic platform decisions to support Bluetooth mesh networking for developers using their smart home platforms. Commercial and industrial environments demand a solution that can reliably and securely connect tens, hundreds or even thousands of devices within a robust, low-latency, large-scale device network. Indeed, your choice of smart building protocol to automate whole facilities or enable wireless sensor networks at commercial scale can make or break your solution or product launch.

    Reliability

    The reliability of a mesh network is judged on its ability to deliver a message from one device to another. Bluetooth mesh is no exception, and uses two forms of message relay to ensure uninterrupted message delivery: peer-to-peer communications and multipath message relay. But what exactly are these two methods and how do they work? With peer-to-peer messaging in Bluetooth mesh, all nodes communicate directly with one another. There is no centralised hub, which means that if one node breaks down there is no domino effect and no single point of failure in your network.
    Multipath messaging enables Bluetooth mesh to use a managed-flood message relay architecture that is self-healing for reliable message delivery. If a pathway is blocked, the message can simply take an alternative route, allowing wireless installations to achieve the trouble-free performance and scalability of wired systems.

    Security

    One of the most discussed issues related to smart building applications is security. Does Bluetooth mesh networking have a security architecture that is designed to address the pressing concerns of companies deploying a large-scale wireless device network? The answer is yes. To ensure all mesh messages are encrypted and authenticated, all communication is secured using AES-CCM with 128-bit keys – an authenticated-encryption algorithm that provides both confidentiality and government-grade authentication. Don’t assume this is all on one layer, either. At the heart of Bluetooth mesh security are three types of security keys – device keys, application keys, and network keys. These keys p...
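AES-CCM with 128-bit keys, as used by Bluetooth mesh, is available in standard crypto libraries. Below is a minimal sketch using the third-party Python `cryptography` package; the key, nonce, and payload here are illustrative stand-ins, since a real mesh stack derives keys, nonces, and MIC sizes per the Bluetooth Mesh Profile specification:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)  # 128-bit key, as in Bluetooth mesh
aesccm = AESCCM(key, tag_length=8)         # 8-byte authentication tag (MIC)
nonce = os.urandom(13)                     # 13-byte CCM nonce

plaintext = b"lamp on"                     # stand-in for a mesh message payload
ciphertext = aesccm.encrypt(nonce, plaintext, None)  # encrypt + authenticate
assert aesccm.decrypt(nonce, ciphertext, None) == plaintext  # verify + decrypt
```

The single `encrypt` call delivers both properties the article names: confidentiality (the ciphertext) and authentication (the appended tag, which `decrypt` checks before returning any data).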
  • Socionext encoder generates live 8K video streams

    Socionext, the video data processing specialist, has developed a streaming encoder unit – the “e8” – which generates live 8K video streams, expanding the use of high-quality, high-definition video. The e8 is equipped with Socionext’s multi-channel, real-time encoder SoC, the MB86M31. It is capable of real-time encoding of 8K/60p video with HEVC/H.265, and enables live streaming of large, high-definition video data over IP networks. It also supports the 4:2:2 10-bit color profile required in high-quality professional video shooting, to deliver ultra-vivid, life-like video images. The e8 comes with a 12G-SDI interface (4ch) for easy connection to a wide range of 8K cameras. It supports a number of streaming protocols, including HLS and RTP. With an intuitive user interface for configuring system settings, the e8 also enables users to develop a high-quality, reliable 8K live streaming system quickly and efficiently. Socionext has tested and verified operation in various environments by combining the e8 with the company’s “s8” media player. By connecting these two devices over a network, users can attain an advanced video streaming solution where high definition, high quality and real-time delivery are the essential requirements. Applications include the viewing of sports and other events in public spaces, and video communications in schools, commercial enterprises and other organizations. The solution can also facilitate live 8K VR streaming for the most immersive user experience possible. It will be available worldwide in September 2019.
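The raw numbers show why HEVC encoding is essential in this pipeline. A back-of-the-envelope sketch; the resolution, frame rate, and sampling figures follow from the 8K/60p 4:2:2 10-bit profile named above, while the function name is my own:

```python
def raw_video_gbps(width, height, fps, bits_per_sample, samples_per_pixel):
    """Uncompressed video data rate in Gbit/s."""
    return width * height * fps * bits_per_sample * samples_per_pixel / 1e9

# 8K/60p, 4:2:2 10-bit: one luma sample per pixel plus two chroma samples per
# two pixels averages out to 2 samples per pixel.
rate = raw_video_gbps(7680, 4320, 60, 10, 2)
print(f"~{rate:.1f} Gbit/s uncompressed")  # roughly 40 Gbit/s before encoding
```

Roughly 40 Gbit/s of raw video has to be compressed by two to three orders of magnitude before it can stream over typical IP networks, which is the encoder's job.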
