News
  • Time to Take 'Hippocratic Oath' for Engineering

    2019-08-27
    To develop smart cities that serve people, planet and society, engineers might consider adopting a Hippocratic Oath for Engineering to guide their work. Cities are becoming smarter as we add sensors, extract and combine data, and optimize processes. Smart cities promise to improve our lives with more comfort, services, safety, efficiency and connectivity. Transport might become instantaneous and flawless. Energy could be used and produced as efficiently and sustainably as possible. Crime might be detected when, or even before, it happens. Cities might run with ease because processes become interlinked. People may not have to worry about spending too much money or doing taxes, because the city takes care of it for them. Our every need could be met through micro-advertising based on emotions, (predicted) behavior and buying histories. Although technology has the potential to offer many benefits, we might want to be smart about what we build and how we build it. As pointed out in the EE Times article entitled "Wanted: the Human Side of Technologies," there is a side to smart cities that we should not overlook. Shaping technology, which is becoming ever more ingrained in our environment and our lives, is not something to be done lightly. People, nature and society need to be at the center of the decision-making process, yet they are often forced to take a back seat when engineering decisions are made only from technical, business, economic or governance perspectives. In the heat of things, the undervalued, implicit and invaluable parts of life are easily overlooked or discarded. We might end up with cities that are too one-dimensional for life.
    Shared responsibility
    Shaping and applying technology should be a shared responsibility. An ethical practice and open collaboration can help us develop smart cities that serve people, nature and society, and their future. Engineers have a duty to consider the consequences of their work on every level possible.
Do we perhaps need a Hippocratic Oath for Engineering? A city is more than its buildings, shops and streets. It comes alive from the people in it. Great cities can make people come alive too, just as nature, a great book, education, family or something wonderful can. People live lives full of dreams, successes, doubt, failures, mistakes, contradictions, spontaneity, dilemmas, thoughts, learning, sorrow, joy, et cetera. With a smart approach we might cater to this full spectrum of life. Imagine a smart city that helps you to grow and learn in a natural way. Maybe public spaces provide subtle interactions that help people feel happy and calm. How about a smart city that stimulates people's creativity and acts as a shared stage for people to share and mix their art? Imagine streets that help you stay healthy and fit. Think about public spaces that stimulate meaningful conversations between strangers. How about sidewalks that help you discover your city? Imagine a smart city that ada...
  • How and why electric vehicles will change the way cars look

    2019-08-23
    Once a novelty, electric vehicles (EVs) have moved into the mainstream thanks to high-profile companies like Tesla and best-sellers like the Nissan Leaf. These cars are feature-packed and technology-heavy, but the innards aren’t the only part of the car that’s changing. These cars look different on the outside too, and the changes to the inside are driving some of those made to the exterior. Electric power requires fewer moving mechanical parts but more electronic parts. To account for these changes, vehicle designers are reimagining what a car looks like, and what a car can do. While many of these changes are evolutionary, some are quite revolutionary, too.
    Grille-less fronts, ‘frunks,’ and sensor-coated bumpers
    It’s easy to spot an EV approaching you on the road because the front end of an electric car looks different from that of a gas-powered vehicle. Fewer moving mechanical parts are needed in an EV, and in most cars the majority of those parts are found up front within the engine bay. All that freed-up space behind the front wheels is available for use elsewhere. “With added freedom in the absence of an engine up front, we can expect manufacturers to get really creative in complete redesigns of the front end,” says CARiD product training director and former automobile engineer Richard Reina. One of the most noticeable differences between electric and combustion-engine-powered vehicles is the elimination of the grille. Electric-powered cars require far less ventilation than a radiator usually demands. While these cars still need to shed heat, it’s nowhere near as much as a traditional combustion engine produces. An electric drivetrain also doesn’t require oil lubrication, which means designers can eliminate a large portion of the lubrication system as well. Reina thinks this could lead more EV manufacturers to add a “frunk,” providing additional storage space in the front of the car. But could cars lose their front ends altogether? Some might.
Look at Volkswagen’s prototype bus. Other companies like Toyota are also planning EVs with smaller front ends. The bumper itself, and its importance to the car, will also change. With autonomous driving features becoming more common, the front bumper (and rear bumper) will be positively lined with sensors. Side-view mirrors will also disappear or shrink considerably, replaced instead with cameras (if the law finally allows them). With LED light technology improving, the big headlights of the past will also morph into small slits or dots, perhaps built into the hood or front bumper.
On the inside: Roomy and tech heavy
The elimination of moving parts will also allow EV manufacturers to increase the size of the interior without enlarging the vehicle overall. This will mean plentiful legroom for all passengers, as well as a large trunk. With autonomous driving taking over during the next decade, the standard front-facing seating arrangement could very well be no more. Since the car is dr...
  • How has 5G changed our lives?

    2019-08-19
    It’s still too early to experience the real benefits of 5G. Current 5G deployments are limited to just a few neighborhoods in the largest cities, and even there it’s difficult to find a stable 5G signal. None of the truly transformative changes in tech thanks to 5G are possible with such spotty coverage. But you won’t be waiting too long. The major wireless carriers all expect to have a sizeable number of customers on 5G networks by 2025, and the tech industry is already developing next-generation technologies to take advantage of an always-on, super-high-speed connection. What will this future look like? We spoke with close to a dozen futurists and technology entrepreneurs to get their predictions on what 5G will look like in the year 2025. From smart cities to smarter homes, to significant advances in artificial intelligence, a lot is about to change.
    SMARTER CARS
    Our cars will become smarter, as they’ll be able to ‘talk’ with one another and with traffic management systems at large. Expect to have fewer ‘cars’ on our city roads by 2025: the adoption of self-driving vehicles, 5G, robot taxis, and a growing gig economy will combine to change how we see cars. The cars that do remain on the road will have more sensors than ever. These sensors won’t just help you park, stay in your lane, or avoid accidents anymore. With 5G, they’ll be interconnected. This opens up a whole new world of possibilities, all of which will make driving safer, quicker, and less stressful.
    SMARTER CITIES AND SMARTER HOMES
    Traffic congestion is worsening as cities grow. Statistics show that average commute times continue to increase, and will keep doing so as more cars take to the road. There is a significant need for traffic management, according to experts. Robust 5G services may soon enable decidedly futuristic-sounding applications. A.I.-assisted traffic management systems and just-in-time communications will transform the way we move within our cities.
Such a system could theoretically make traffic jams a thing of the past. Artificial intelligence would help manage traffic on a regional level. 5G and A.I.-enabled traffic control together could proactively adjust speeds on highways to keep cars moving, or automatically divert traffic around incidents. Cars entering the road could be metered, helping to control traffic flow. Going further, smart power grids will improve energy efficiency, and improved security systems will keep us safer than ever before. The bandwidth requirements for these applications are far too high for existing network infrastructure, but small cell technologies may soon enable a veritable world of possibilities. Smart homes will also get better. Bandwidth has always been an issue. By addressing the coverage issues that occur with Bluetooth, Wi-Fi, and other communications technologies, 5G allows more devices to go online.
FASTER SPEEDS ON YOUR PHONE
4G’s fast data speeds jump-started the app revolution. It’s still not fast enough to handle truly data-inten...
  • New Smart Memory Controller, Breaking Through the Memory Bandwidth Bottleneck

    2019-08-12
    Microchip's new SMC 1000 8x25G serial memory controller enables CPUs and other compute-centric SoCs to utilize four times the memory channels of parallel-attached DDR4 DRAM within the same package footprint. The SMC 1000 8x25G enables higher memory bandwidth and media independence with ultra-low latency for compute-intensive applications such as High Performance Computing (HPC), big data, artificial intelligence and machine learning. The device interfaces to the CPU via a narrow 8-lane differential Open Memory Interface (OMI)-compliant 25 Gbps interface and bridges to memory via a wide 72-bit DDR4-3200 interface. It supports three DDR4 data rates: DDR4-2666, DDR4-2933, and DDR4-3200. The narrow serial interface significantly reduces the number of host CPU or SoC pins required per DDR4 memory channel, which allows for more memory channels and therefore increases the available memory bandwidth. The SMC 1000 8x25G also features an innovative low-latency design, so memory systems using the product have virtually identical bandwidth and latency performance to comparable LRDIMM products. The SMC 1000 8x25G combines data and address buffering into one unified chip, whereas an LRDIMM uses an RCD buffer plus separate data buffers. The device is a foundational building block for a wide range of OMI memory applications. These include Differential Dual-Inline Memory Module (DDIMM) applications, such as standard-height 1U DDIMMs with capacities from 16 GB to 128 GB and double-height 2U DDIMMs with capacities beyond 256 GB. The SMC 1000 8x25G also supports chip-down applications with off-the-shelf Registered DIMMs (RDIMMs) and NVDIMM-N devices. It integrates an on-chip processor that performs control-path and monitoring functions such as initialization, temperature monitoring, and diagnostics, and supports manufacturing test operations of attached DRAM memory.
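As a back-of-the-envelope check on those figures (a sketch based only on the interface widths quoted above, ignoring encoding and protocol overhead; these are derived numbers, not vendor benchmarks), the peak raw bandwidth of the 8-lane serial link matches that of one 72-bit DDR4-3200 channel:

```python
# Rough comparison: peak raw bandwidth of the 8-lane OMI link vs. a single
# 72-bit DDR4-3200 channel (64 data bits + 8 ECC bits), to show how a narrow
# serial interface can match a wide parallel one per channel.

def omi_bandwidth_gbytes(lanes: int, gbps_per_lane: float) -> float:
    """Peak raw OMI link bandwidth in GB/s (encoding overhead ignored)."""
    return lanes * gbps_per_lane / 8  # bits -> bytes

def ddr4_bandwidth_gbytes(mt_per_s: int, data_bits: int = 64) -> float:
    """Peak DDR4 channel bandwidth in GB/s; only the 64 data bits carry payload."""
    return mt_per_s * (data_bits // 8) / 1000  # MB/s -> GB/s

omi = omi_bandwidth_gbytes(lanes=8, gbps_per_lane=25.6)  # 8 x 25.6 Gbps
ddr4 = ddr4_bandwidth_gbytes(mt_per_s=3200)              # DDR4-3200, 64-bit data
print(f"OMI 8x25.6G: {omi:.1f} GB/s, DDR4-3200 channel: {ddr4:.1f} GB/s")
```

Both come out to 25.6 GB/s, which is why trading a wide parallel channel for a few serial lanes frees up pins for additional channels without sacrificing per-channel bandwidth.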
Microchip’s Trusted Platform support, including a hardware root-of-trust, ensures device and firmware authenticity and supports secure firmware updates.
SMC 1000 8x25G specifications:
OMI Interface
- 1x8 and 1x4 support
- OIF-28G-MR
- Up to 25.6 Gbps link rate
- Dynamic low-power modes
DDR4 Memory Interface
- x72-bit DDR4-3200, 2933, or 2666 MT/s memory support
- Supports up to 4 ranks
- Supports up to 16 Gbit memory devices
- 3D stacked memory support
Persistent Memory Support
- Support for NVDIMM-N modules
Intelligent Firmware
- Open-source firmware
- On-board processor provides DDR/OMI initialization and in-band temperature and error monitoring
- ChipLink GUI
Security and Data Protection
- Hardware root-of-trust, secure boot, and secure update
- Single-symbol-correction/double-symbol-detection ECC
- Memory scrub with auto-correction on errors
Peripherals Support
- Support for SPI, I²C, GPIO, UART and JTAG/EJTAG
Small Package and Low Power
- Power-optimized 17 mm x 17 mm package
Source: Microsemi
  • CEVA and Immervision Enter into Strategic Partnership for Advanced Image Enhancement Technologies

    2019-08-09
    Partnership includes $10 million technology investment from CEVA, securing exclusive licensing rights to Immervision's patented image processing and sensor fusion software portfolio for wide-angle cameras, which are broadly used in surveillance, smartphone, automotive, robotics and consumer applications. MOUNTAIN VIEW, Calif. and MONTREAL, Aug. 6, 2019 /PRNewswire/ -- CEVA, Inc. (NASDAQ: CEVA), the leading licensor of wireless connectivity and smart sensing technologies, today announced that it has entered into a strategic partnership agreement with privately owned Immervision, Inc. of Montreal, Canada, a developer and licensor of wide-angle lenses and image processing technologies. Immervision's patented image enhancement algorithms and software deliver dramatic improvements in image quality and remove the inherent distortions associated with the use of wide-angle cameras, particularly at the edges of the frame. Immervision's technologies have shipped in more than 50 million devices to date through its broad customer base, which includes Acer, Dahua, Garmin, Hanwha, Lenovo, Motorola, Quanta, Sony and Vivotek. Under the partnership agreement, CEVA made a $10 million technology investment to secure exclusive licensing rights to Immervision's advanced portfolio of patented wide-angle image processing technology and software. This includes real-time adaptive dewarping, stitching, image color and contrast enhancement, and electronic image stabilization. CEVA will also license Immervision's proprietary Data-in-Picture technology, which embeds fused sensor data, such as that offered by Hillcrest Labs (a business recently acquired by CEVA), within each video frame. This adds contextual information to each frame that enables better image quality, video stabilization and accurate machine vision in AI applications.
The companies will also collaborate in licensing full end-to-end solutions comprised of Immervision's patented wide-angle Panomorph optical lens design and the complementary image enhancement software. Immervision's hardware-agnostic software portfolio will continue to be offered for all System-on-Chip (SoC) platforms containing a GPU (Graphics Processing Unit) and in a power-optimized version for SoCs containing the CEVA-XM4 or CEVA-XM6 intelligent vision DSPs. Along with Immervision's software, CEVA also offers a broad range of other computer vision and AI software technologies, such as the CEVA Deep Neural Network (CDNN) - a neural networks graph compiler, the CEVA-SLAM software development kit, and the CEVA-CV optimized computer vision software library. Gideon Wertheizer, CEO of CEVA, commented: "This strategic partnership and technology investment with Immervision provides CEVA with a significant market advantage for the fast growing wide-angle camera market, particularly in smartphones, surveillance, ADAS and robotics. Through the combination of Immervision's imaging technologies and CEVA's vision and AI software technologies, ...
  • NVIDIA @ SIGGRAPH 2019: NV to Enable 30-bit OpenGL Support on GeForce/Titan Cards

    2019-08-05
    Kicking off last week was SIGGRAPH, the annual North American professional graphics gathering that sees everyone from researchers to hardware vendors come together to show off new ideas and new products. Last year’s show ended up being particularly important, as NVIDIA used it as a backdrop for the announcement of their Turing graphics architecture. This year’s NVIDIA presence is far more low-key – NVIDIA doesn’t have any new hardware this time – but the company is still at the show with some announcements. Diving right into matters then, this year NVIDIA has an announcement that all professional and prosumer users will want to take note of. At long last, NVIDIA is dropping the requirement to use a Quadro card to get 30-bit (10bpc) color support in OpenGL applications; the company will finally be extending that feature to GeForce and Titan cards as well. Dubbed their Studio Driver: SIGGRAPH Edition, NVIDIA’s latest driver eliminates the artificial restriction that prevented OpenGL applications from drawing in 30-bit color. For essentially all of the company’s existence, NVIDIA has restricted this feature to their professional visualization Quadro cards in order to create a larger degree of product segmentation between the two product families. With OpenGL (still) widely used for professional content creation applications, this restriction didn’t prevent applications like Photoshop from running on GeForce cards, but it kept true professional users from getting the full, banding-free precision that the program (and their monitors) were capable of. So for the better part of 20 years, this has been one of the most important practical reasons to get a Quadro card over a GeForce card: while it’s possible to use 30-bit color elsewhere (e.g. DirectX), it was held back in a very specific scenario that impacted content creators. But with this latest Studio Driver, that’s going away.
NVIDIA’s Studio drivers, which can be installed on any Pascal or newer GeForce/Titan card – desktop and mobile – will no longer come with this 30-bit restriction. It will be possible to use 30-bit color anywhere the application supports it, including OpenGL applications. To be honest, this wasn’t a restriction I was expecting NVIDIA to lift any time soon. Rival AMD has offered unrestricted 30-bit color support for ages, and it has never caused NVIDIA to flinch. NVIDIA’s official rationale for all of this feels kind of thin – it was a commonly requested feature since the launch of the Studio drivers, so they decided to enable it – but as their official press release notes, working with HDR material pretty much requires 30-bit color, so it’s seemingly no longer a feature NVIDIA can justify withholding from non-Quadro cards. Still, I suppose one shouldn’t look a gift horse in the mouth. Otherwise, at this point I’m not clear on whether this is going to remain limited to the Studio drivers, or will come to the regular “game ready” GeForce dr...
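The practical difference between 8 and 10 bits per channel is easy to demonstrate numerically. A minimal sketch (an illustration of quantization banding in general, not of NVIDIA's driver behavior): a subtle gradient quantized at 8 bpc collapses into far fewer distinct steps than at 10 bpc, and those coarse steps are exactly the banding professionals complain about:

```python
# Quantize the same subtle gradient at 8 and 10 bits per channel and count
# how many distinct on-screen intensity steps survive in each case.

def distinct_levels(bits_per_channel: int) -> int:
    """Number of representable intensity steps per color channel."""
    return 2 ** bits_per_channel

def quantize(value: float, bits: int) -> int:
    """Map a normalized intensity in [0, 1] to the nearest representable step."""
    steps = distinct_levels(bits) - 1
    return round(value * steps)

# A subtle gradient: 1000 samples between 10% and 11% gray.
samples = [0.10 + i * (0.01 / 999) for i in range(1000)]
levels_8 = len({quantize(s, 8) for s in samples})
levels_10 = len({quantize(s, 10) for s in samples})
print(f"8 bpc distinct steps: {levels_8}, 10 bpc distinct steps: {levels_10}")
```

Per channel, 10 bpc offers 1024 levels against 256 at 8 bpc, which is why gentle gradients that band visibly at 24-bit color render smoothly at 30-bit.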
  • Toposens Launches TS3 Ultrasonic Sensor

    2019-08-02
    Toposens recently announced the release of its current flagship product, the TS3, a 3D ultrasonic sensor suitable for a wide range of applications in the autonomous systems market with a strong need for reliable object detection and situational awareness. In comparison to common ultrasonic sensors, which usually measure the distance only to the closest reflecting surface, Toposens’ new 3D sensors achieve a wide field of view of up to 160° and provide simultaneous 3D measurements for multiple objects within the scanning area. The operation thus mimics the echolocation techniques used by bats and dolphins for navigation and orientation in the wild. The new TS3 sensor combines carefully selected hardware components with proprietary signal processing algorithms. It is ideally suited for indoor robotic navigation and object avoidance. Systems benefit from its real-time processing capabilities while keeping data transmission bandwidth and power consumption low, which is especially important for battery-powered robots. Exemplary use cases include home cleaning robots and delivery/service robots. The TS3 sensor enables them to reliably map an environment with minimal processing power and to localize themselves in predefined maps to execute complex path-planning algorithms. TS3 sensors perform independently of ambient light conditions and are even capable of detecting mirrored and transparent surfaces, adding a layer of safety where optical sensors tend to fail. For even higher reliability, the generated 3D point cloud can easily be fused with data from other system-relevant sensors. The new TS3 sensor is an embedded sensor system that sends out ultrasound waves in a frequency range inaudible to humans. An array of microphones subsequently records the echoes from all objects in the sensor’s vicinity and computes their locations in 3-dimensional space. It thereby creates an entirely new way of ultrasonic sensing for autonomous systems.
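The echolocation principle described above can be sketched in a few lines (an illustration of the underlying physics, not Toposens' proprietary algorithm; the timing values are hypothetical): the round-trip time of an echo gives a reflector's distance, and arrival-time differences across the microphone array constrain its direction.

```python
import math

# Speed of sound in air at roughly 20 degrees C.
SPEED_OF_SOUND = 343.0  # m/s

def echo_distance(round_trip_s: float) -> float:
    """Distance to a reflector from the pulse round-trip time (out and back)."""
    return SPEED_OF_SOUND * round_trip_s / 2

def bearing_from_tdoa(delta_t: float, mic_spacing: float) -> float:
    """Far-field echo angle (radians) from the time difference of arrival
    between two microphones spaced mic_spacing meters apart."""
    # Extra path length delta_t * c equals mic_spacing * sin(angle).
    return math.asin(max(-1.0, min(1.0, delta_t * SPEED_OF_SOUND / mic_spacing)))

# An echo returning after 11.66 ms corresponds to a reflector about 2 m away.
print(f"distance: {echo_distance(0.01166):.2f} m")
# A 14 microsecond arrival difference across 1 cm mic spacing is off-axis.
print(f"bearing: {math.degrees(bearing_from_tdoa(14e-6, 0.01)):.1f} deg")
```

A real array uses several microphones so that the time-difference constraints intersect in a single 3D position per reflector, producing the point cloud the article describes.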
“Our new ‘Bat Vision’ TS3 sensor is compact, affordable and integration-ready,” explains Tobias Bahnemann, Managing Director of Toposens. “Engineers can easily add it to their sensor stacks to replace or complement their existing optical sensing systems, providing both redundancy and an improved level of accuracy compared to standard ultrasonic sensors in various autonomous navigation applications.” The core technology is based on Toposens’ SoundVision1™ chip, making the sensor system easily adaptable to a variety of product designs. This qualifies the TS3 as the perfect technology platform for developing next-level mass-market vehicles in robotic and even automotive use cases such as automated parking and next-level ADAS functionality. Technical specifications include a detection range of up to 5 meters and a scan rate of approximately 28 Hz. The TS3 returns up to 200 points per second, with each 3D point corresponding to Cartesian coordinates plus an additional volume measurement of the ultr...
  • Aspinity smart-sensing edge architecture tackles power- and data-efficiency problems

    2019-07-29
    Aspinity, a semiconductor startup based in Pittsburgh, USA and funded by the Alexa Fund among others, recently announced the first smart-sensing edge architecture to tackle the power- and data-efficiency problems in the billions of battery-powered consumer electronics, smart home systems, and predictive-maintenance devices on which we increasingly rely. Aspinity announced its reconfigurable analog modular processor (RAMP) platform, an ultra-low-power analog processing platform that overcomes the power and data handling challenges in battery-operated, always-on sensing devices. Incorporating machine learning into an analog neuromorphic processor, Aspinity’s RAMP platform enables 10x power savings over older architectures. Devices can now run for months or years, instead of days or weeks, without battery recharge or replacement.
    Smart-sensing edge architecture
    Elaborating on Aspinity’s smart-sensing edge architecture, Tom Doyle, CEO and founder, said that Aspinity offers a fundamentally new architectural approach to conserving power and data resources in always-on devices. The scalable and programmable RAMP technology incorporates powerful machine learning into an ultra-low-power analog neuromorphic processor that can detect unique events from background noise before the data is digitized. By directly analyzing raw analog sensor data for what’s important, the RAMP chip eliminates the higher-power processing of irrelevant data. System designers can now stop sacrificing features and accuracy for longer battery life. Aspinity’s analyze-first approach reduces the power consumption of always-sensing systems by up to 10x and data requirements by up to 100x. The RAMP chip’s analog blocks can be reprogrammed with application-specific algorithms for the detection of different events and different types of sensor input.
For example, designers can use a RAMP chip for always-listening applications, where the chip conserves system power by keeping the rest of the always-listening system in a low-power sleep state until a specific sound, such as voice or an alarm, has been detected. Unlike other sensor-edge solutions for voice activity detection, the RAMP chip also supports voice-first devices by storing the pre-roll data required by wake-word engines. For industrial applications, designers can use a RAMP chip to sample and select only the most important data points from thousands of points of sensor data: compressing vibration data into a reduced number of frequency/energy pairs and dramatically decreasing the amount of data collected and transmitted for analysis. This is the USP for the RAMP platform. With so many ways to program a RAMP core, as well as broad algorithm support for different types of analysis and output, the RAMP chip uniquely enables a whole new generation of smaller, lower-cost, more power- and data-efficient, battery-operated, always-on devices for consumer, IoT, industrial and biomedical applications.
Much longer battery life
Short batter...
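The frequency/energy-pair compression mentioned above can be illustrated with a short sketch. This is a conceptual example only: RAMP performs its analysis in the analog domain before digitization, whereas this digital snippet merely shows the data-reduction idea of keeping only the dominant spectral pairs from a raw vibration window.

```python
import cmath
import math

def dft(samples):
    """Naive discrete Fourier transform (fine for a short illustrative window)."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def top_frequency_energy_pairs(samples, sample_rate, keep=3):
    """Reduce a raw window to its `keep` strongest (frequency_hz, energy) pairs."""
    spectrum = dft(samples)
    half = len(samples) // 2  # real signal: positive frequencies only
    pairs = [(k * sample_rate / len(samples), abs(spectrum[k]) ** 2)
             for k in range(1, half)]
    return sorted(pairs, key=lambda p: p[1], reverse=True)[:keep]

# 256 samples of a vibration with a strong 48 Hz and a weaker 120 Hz component.
rate = 1024
signal = [math.sin(2 * math.pi * 48 * t / rate) +
          0.4 * math.sin(2 * math.pi * 120 * t / rate) for t in range(256)]
pairs = top_frequency_energy_pairs(signal, rate, keep=2)
print(pairs)  # the dominant pair sits at 48 Hz, the second at 120 Hz
```

Transmitting two (frequency, energy) pairs instead of 256 raw samples is the kind of order-of-magnitude reduction in collected and transmitted data the article refers to.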
  • OptimalPlus launches Lifecycle Analytics Solution for ADAS

    2019-07-26
    OptimalPlus, a specialist in lifecycle analytics solutions, has launched a new Lifecycle Analytics Solution for Advanced Driver Assistance System (ADAS) cameras. The offering provides manufacturers of ADAS cameras with real-time data insights based on big data and machine learning to optimise production and increase product quality, according to the company. ADAS cameras are electro-optical systems that aid vehicle drivers; they are intended to increase car and road safety and are a cornerstone of the technologies being developed to support autonomous and semi-autonomous vehicles. The manufacturing of ADAS cameras is a highly complicated and costly process, involving the integration of multiple specialised and sensitive sensors in a series of irreversible processes to create a high-performance system. This has made it difficult for manufacturers to detect defective products during the assembly process, resulting in unpredictable performance that remains undetermined until the camera system is fully assembled and tested. As a result, manufacturers of new ADAS camera designs struggle with extremely high scrap rates of around 25% on already expensive systems, leading to a high cost per unit and impeding the rate at which automakers can integrate the systems into their cars. “Automakers need a holistic solution that can provide the big picture about the health of a vehicle,” said Dan Glotter, CEO of OptimalPlus. “We are enabling automakers to take full advantage of the potential offered by new technologies while removing concerns about manufacturing quality products cost-efficiently.
“The next two waves facing the automotive industry are autonomy and electrification, and both are going to bring enormous manufacturing complexities, requiring new analytics methods for faster product ramp, reduced scrap rates, and improved quality & reliability.” Assembling ADAS cameras relies on a complicated supply chain that provides electronic and optical components from different geographical locations, each with its own methods of ensuring and monitoring reliability. With manufacturers relying on separate silos of product data and information, it is exceedingly difficult to ensure that these components will perform up to the required standards. OptimalPlus looks to address these issues by providing much greater visibility throughout the supply chain, connecting supplier data to field performance and enabling a full overview of production. This increases efficiency and enables preemptive action: problematic products can be found earlier in the manufacturing cycle, and unreliable products can be prevented from being deployed or removed in real time from the factory floor, reducing scrap rates and avoiding costly recalls. “As systems such as ADAS cameras, which are the backbone of autonomous vehicles, become critical for safe driving, guaranteeing system quality is only going to become more important. As such, OEMs are going to demand accountability from their technology providers on all suppl...
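The kind of early, data-driven screening described here can be sketched with a simple statistical rule (an illustration of the idea, not OptimalPlus' actual analytics; the measurement, lot data, and threshold are hypothetical): units whose in-line test results deviate sharply from the lot population are flagged before the costly, irreversible assembly steps.

```python
import statistics

def flag_outliers(measurements, z_threshold=2.5):
    """Return indices of units whose measurement lies more than z_threshold
    standard deviations from the lot mean (candidates for review before
    final assembly). With only ~10 units, a single extreme value inflates
    sigma, so a 2.5-sigma threshold is used rather than the classic 3."""
    mean = statistics.fmean(measurements)
    sigma = statistics.pstdev(measurements)
    if sigma == 0:
        return []
    return [i for i, m in enumerate(measurements)
            if abs(m - mean) / sigma > z_threshold]

# Hypothetical focus-sharpness scores from one lot; unit 4 is clearly off.
lot = [0.91, 0.93, 0.92, 0.90, 0.31, 0.94, 0.92, 0.91, 0.93, 0.92]
print(flag_outliers(lot))  # -> [4]
```

A production system would of course fuse many test signals across the supply chain rather than one measurement, but the principle of pulling statistical outliers early is the same.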
  • Comment: Bluetooth as the smart building protocol of choice

    2019-07-22
    With more than 374 million Bluetooth smart building devices expected to ship annually by 2023, there’s plenty of opportunity for developers and engineers to join the Bluetooth mesh networking revolution. What is it exactly that is making everyone go mad for mesh? The mesh capability enables many-to-many device communications and is optimised for creating large-scale device networks. Since its release in July 2017, Bluetooth mesh has become a clear choice for large-scale device networks. Already, more than 200 products with mesh networking capability have been qualified from leading silicon, stack, component, and end-product vendors. At the Bluetooth SIG we are seeing a tremendous amount of momentum behind the mesh networking capabilities of Bluetooth, including large players in the smart home market, such as Alibaba and Xiaomi, making strategic platform decisions to support Bluetooth mesh networking for developers using their smart home platforms. Commercial and industrial environments demand a solution that can reliably and securely connect tens, hundreds or even thousands of devices within a robust, low-latency, large-scale device network. Indeed, your choice of smart building protocol to automate whole facilities or enable wireless sensor networks at commercial scale can make or break your solution or product launch.
    Reliability
    The reliability of a mesh network is judged on its ability to deliver a message from one device to another. Bluetooth mesh is no exception, and it uses two forms of message relay to ensure uninterrupted message delivery: peer-to-peer communication and multipath message relay. But what exactly are these two methods and how do they work? With peer-to-peer messaging in Bluetooth mesh, all nodes communicate directly with one another. There is no centralised hub, which means that if one node breaks down there is no domino effect and no single point of failure in your network.
Multipath messaging enables Bluetooth mesh to use a managed-flood message relay architecture that is self-healing, for reliable message delivery. If a pathway is blocked, the message can simply take an alternative route, allowing wireless installations to achieve the trouble-free performance and scalability of wired systems.
Security
One of the most discussed issues related to smart building applications is security. Does Bluetooth mesh networking have a security architecture designed to address the pressing concerns of companies deploying a large-scale wireless device network? The answer is yes. To ensure all mesh messages are encrypted and authenticated, all communication is secured using AES-CCM with 128-bit keys, an authenticated-encryption algorithm that provides both confidentiality and message authentication. Don’t assume this is all on one layer, either. At the heart of Bluetooth mesh security are three types of security keys: device keys, application keys, and network keys. These keys p...
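The managed-flood relay described above can be sketched as a toy simulation (a conceptual model, not the Bluetooth Mesh stack; real relays also apply TTL fields, message caches and network-layer security with details omitted here): every relay rebroadcasts a message to its neighbors, a seen-message cache suppresses duplicate relaying, and a hop limit bounds propagation. Delivery survives a failed node whenever an alternative path exists.

```python
from collections import deque

def deliver(links, source, dest, ttl, failed=frozenset()):
    """Flood a message through `links` (node -> neighbor list); return True if
    it reaches `dest` within `ttl` hops, skipping any `failed` nodes."""
    seen = {source}                  # message cache: each node relays once
    queue = deque([(source, ttl)])
    while queue:
        node, hops = queue.popleft()
        if node == dest:
            return True
        if hops == 0:
            continue                 # TTL exhausted: no further relaying
        for neighbor in links.get(node, ()):
            if neighbor not in seen and neighbor not in failed:
                seen.add(neighbor)   # cached: suppresses re-broadcast storms
                queue.append((neighbor, hops - 1))
    return False

# A small mesh with two routes from A to E (via B or via C).
mesh = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(deliver(mesh, "A", "E", ttl=4))               # True
print(deliver(mesh, "A", "E", ttl=4, failed={"B"})) # True: self-heals via C
print(deliver(mesh, "A", "E", ttl=1))               # False: hop limit too low
```

The middle case is the self-healing property in miniature: knocking out relay B leaves the C path intact, so the message still arrives with no single point of failure.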