Michael Hurlston: How Synaptics pivoted from mobile/PC sensors to the internet of things

Synaptics pioneered touchscreen sensors for PCs and mobile devices. But the San Jose-based hardware company has shifted to where the processing is happening: at the edge of the network.

Under CEO Michael Hurlston, the 35-year-old company has pivoted away from its early markets and focused on artificial intelligence at the edge to bring greater efficiency to internet of things (IoT) devices. With AI at the edge, the company's chips can process sensor data locally and send alerts over the network only when they're relevant.

Hurlston said that processing paradigm will take the load off crowded home and business networks and preserve the privacy of customer data, which no longer has to be stored in big datacenters. In the company's most recent quarter, which ended December 31, the internet of things accounted for 43% of the company's $358 million in overall revenue, while PCs contributed 26% and mobile 31%. Synaptics has 1,400 employees.

Synaptics' customers now span the consumer, enterprise, service provider, automotive, and industrial markets. IoT chip markets are expected to grow 10% to 15% a year, and the company recently acquired Broadcom's wireless connectivity products for markets beyond mobile phones. Synaptics also launched its new Katana low-power AI processors for the edge. I spoke with Hurlston, who has been in the top job for 18 months, about this transformation.

Here’s an edited transcript of our interview.

Above: Michael Hurlston is CEO of Synaptics.

Image Credit: Synaptics

Michael Hurlston: You understand the business probably better than most. We’ve been thought of as mobile, mobile, mobile, and then maybe a PC subhead. We’ve tried to move the company into IoT, and then mobile where we can attack opportunistically. We’re trying to make IoT our big thrust. That’s what came out this quarter. IoT was our largest business. People started believing that we could make that happen. That’s the main thing.

VentureBeat: What do you think about the future in terms of haptics and the sense of touch? I'm a science fiction fan, and I just finished the new book Ready Player Two. They had a VR system in there that could reproduce all of the senses for you.

Hurlston: It sounds both interesting and dangerous.

VentureBeat: Starting where we are, though, do you see anything interesting along those lines that’s coming along?

Hurlston: With the AR/VR glasses, that’s been an interesting intersection of our technology. We have these display drivers that create the ultra-HD images you can see. There’s touch that goes with it, typically, and a lot of the systems have a video processor that feeds the images into the glass. All of those things, we supply them. The AR/VR market has been good for us. It’s obviously still pretty small, but I’m much more optimistic that it’s going to take off. It plays nicely to the bag of technologies we have in the company.

Haptics is advancing. We don’t have haptics today. We do all the touch controllers on glass surfaces. Where we are trying to invest is touch on non-glass surfaces. We can see things coming — headsets are a good example, where you’re trying to touch a piece of plastic and generate sensation through there. In automobiles, on steering wheels, on things like that. We’re trying to move our touch sensors from a typical glass application to other areas where glass isn’t present, and trying to generate accuracy and precision through plastics or other materials.

VentureBeat: It’s interesting that you’re moving into IoT, and IoT devices are getting to the point where you can put AI into them. That feels like quite an advance in computing.

Hurlston: What's going on for us, and this is something probably in your sweet spot to think about — a lot of companies now do these audio wake words, where you're waking up a Google Home or Alexa using voice, and some simple commands are processed on the edge. The wake-up doesn't have to go to the cloud. What we're trying to advance is a visual wake word, where we can have AI in a low-power sensor that can detect an incident, whether it's people coming into a room or chickens moving in a coop.

We have agricultural applications for the idea, where you're counting or sensing livestock. Counting people might apply to, do I need to turn an air conditioner on or off? Do I need to turn a display on or off? Do I need to reduce the number of people? Maybe now, in the COVID environment, you have too many people in a room. You have this low-power battery sensor that can be stuck anywhere, but rather than using voice, it has a simple camera attached, and it does some intelligence at the edge, where we can identify a person or something else — maybe the wind blowing and creating an event in front of the camera. We have a bit of inferencing and training that can happen on the device to enable those applications.
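To make the visual wake word concrete, here's a minimal sketch of the loop such a sensor could run, using the open-source TensorFlow Lite runtime as a stand-in. The model file, class index, threshold, and camera/alert helpers are all hypothetical; Synaptics' actual Katana firmware isn't described in the interview.

```python
# Hypothetical "visual wake word" loop: the device runs a tiny detector
# locally and only touches the network when something relevant is seen.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="person_detect.tflite")  # hypothetical model
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def frame_contains_person(frame: np.ndarray, threshold: float = 0.6) -> bool:
    """Run one low-resolution camera frame through the on-device detector."""
    interpreter.set_tensor(inp["index"], frame[np.newaxis, ...])
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return scores[1] > threshold  # class 1 = "person" in this hypothetical model

# for frame in camera.frames():            # hypothetical camera driver
#     if frame_contains_person(frame):
#         send_alert("person detected")    # the only network traffic
```

Everything above the alert runs on the device itself, which is where the power and privacy savings Hurlston describes come from.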

VentureBeat: It feels like we need some choices among those sensors, too. There are a lot of places where you don't want to put cameras, but you want that 3D detection of people or objects. You don't want to put face recognition in a bathroom.

Hurlston: Right. These little low-power sensors can handle that. They can detect motion where you don't want to have full recognition. They can just detect that something in here is moving, so let's turn on the lights, particularly for industrial applications where you want to save power. It all makes sense and flows. We can have pretty high precision, where you do face recognition because there's an AI network on the chip, but you can also just do simple motion and on/off. It just depends on how precise you need your sensor to be.
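At the low-precision end of that spectrum, detection can be as simple as frame differencing. A rough sketch of the idea; the threshold and the light-switch hook are made up for illustration:

```python
import numpy as np

def motion_score(prev: np.ndarray, curr: np.ndarray) -> float:
    """Mean absolute pixel change between two consecutive grayscale frames."""
    return float(np.mean(np.abs(curr.astype(np.int16) - prev.astype(np.int16))))

# Something moved if the average pixel changed by more than a few gray levels.
# if motion_score(prev_frame, curr_frame) > 8.0:  # hypothetical threshold
#     turn_on_lights()                            # hypothetical actuator
```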

Above: Synaptics is driving into the automotive chip market.

Image Credit: Synaptics

VentureBeat: Do we credit Moore’s Law for some of this advance, being able to put more computing power into small devices? I suppose we can also credit neural networks actually working now.

Hurlston: It's more the latter. We got reasonably good at neural networks on a high-power chip, and we were able to train the classic things. You talked about facial recognition, or seeing in the dark, where we can pull out an image and train, train, train with very low light. Light is measured in lux, where one lux is roughly candlelight, and we can now pull out an image at 1/16 of a lux. That's almost total darkness. You can't see it with your eyes, but you can pull out and enhance an image in low light.

We did that first. We developed the neural networks on high-power chips, and then migrated it to lower-power, and obviously shrunk it in the process. We were able to condense the inferencing and some of the training sequences on that low-power chip. Now we think we can deliver — it’s not going to be the same use case, but we can deliver at least an AI algorithm on a battery-powered IC.

VentureBeat: It feels like that's important for the continued existence of the planet, with things like too much cloud computing. AI at the edge is a more ecologically sound solution.

Hurlston: We’re seeing two key applications. One is obvious, and that’s power consumption. All this traffic that’s cluttering up the datacenters is consuming gigawatts, as Doc Brown would say, of power. The other one is privacy. If the decisions are made on the edge, there’s less chance that your data gets hacked and things like that. Those are the two things that people understand very simply. The third bullet is latency, making decisions much faster at the edge than having to go back to the cloud, do the calculation, and come back. But the two most important are power and privacy.

VentureBeat: Did you already have a lot of people who can do this in the company or did you have to hire a new kind of engineer to make AI and machine learning happen?

Hurlston: It's a confluence of three things. We initially had this for video. When we adopted it on higher-power chips, the kind more generally associated with machine learning, we had to bring in our own talent. Our second step was to take an audio solution. The original idea was the wake word, following the market trend to do compute at the edge for voice. We took those AI and machine learning engineers, shrunk the neural network, and put it into an audio chip, but we found we were behind. A lot of people can do that wake word training. The third leg of the stool was a partnership we recently announced with a company called Eta Compute, a small startup in southern California. They have a lot of machine learning and AI experts. The big language is TensorFlow, and they have a compiler that can take a TensorFlow model and compile it onto our audio chip.

The confluence of those things created this low-power AI at the edge solution that we think is different. It has quite a bit of market traction. But it’s a totally different approach to apply what I call “visual wake word” to this whole space.
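Eta Compute's compiler itself is proprietary, but stock TensorFlow Lite post-training quantization illustrates the same shrink-the-network step for a tiny fixed-point chip. The trained model file and the calibration-sample generator below are hypothetical:

```python
import tensorflow as tf

model = tf.keras.models.load_model("wake_word.h5")  # hypothetical trained model

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_data():
    # calibration_samples() is a hypothetical generator of real input clips,
    # used by the converter to calibrate the int8 quantization ranges.
    for sample in calibration_samples():
        yield [sample]

converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8    # full int8 in and out, so the
converter.inference_output_type = tf.int8   # low-power core never sees floats

with open("wake_word_int8.tflite", "wb") as f:
    f.write(converter.convert())
```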

VentureBeat: It seems like a good example of how AI is changing companies and industries. You wouldn’t necessarily expect it in sensing, but it makes sense that you’d have to invest in this.

Hurlston: You’ve covered technology for long enough, and you’ve been through all the cycles. Right now, the AI cycle is there. Everybody has to talk about it as part of the technology portfolio. We’re no different. We got lucky to a certain extent because we’d invested in it for a pretty clear problem, but we were able to apply it to this new situation. We have some runway.

Above: Synaptics provides chips for DisplayLink docking stations.

Image Credit: Synaptics

VentureBeat: When it comes to making these things better, either better at giving you the right information or better at the sensing, it feels like where we are with the current devices, we still need a lot of improvement. Do you see that improvement coming?

Hurlston: It comes from training data. You know better than most that it's all about being able to provide these neural networks with the right training data. The hardest problem you have is generating datasets on which to train. Before I came here, I was at a software AI company, and we participated in a very interesting competition. The University of North Carolina brought all the software AI companies together, and the challenge was who could best identify a dog, from a Chihuahua to a German shepherd to a pit bull, from a series of pictures. They tried to throw in giraffes and things like that.

We didn't win, but the winner was able to identify dogs with about 99% accuracy. It was amazing how well their dataset and training let them pick out dogs. But then the organizers took the pictures and flipped them upside down, and nobody could get it. Once an image was upside down, none of the systems could identify the dog nearly as well as they had when it was right side up. This is all about being able to train, to train on the corner cases.

With this low-light thing we've done on our video processor, we take snapshots over and over again in super low-light conditions to train the engine to recognize a new situation. That's what this is all about. You know the existing situation; it's being able to apply the existing to the new. That's a lot harder than it sounds.
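In standard training pipelines, corner cases like the upside-down dog are attacked with data augmentation. A brief Keras sketch of the idea; the dataset itself is assumed:

```python
import tensorflow as tf

# Random flips and rotations expose the network to orientations it never
# saw raw, including fully upside-down images; a darkening-only brightness
# range mimics the low-light training Hurlston describes.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.5),            # up to +/-180 degrees
    tf.keras.layers.RandomBrightness((-0.8, 0.0)),  # darken, never brighten
])

# train_ds is a hypothetical tf.data.Dataset of (image, label) batches.
# train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))
```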

VentureBeat: If we get to the actual business, what’s doing well right now, and what do you think is going to be the source of major products in the future?

Hurlston: We're sort of the IoT of IoT. Within what we call our IoT business, we have lots of different technologies. We touched on our audio technology. That's done very well. You have headsets going into a lot of work-from-home situations, with the over-ear design and active noise canceling. That business has done super well for us. We have Wi-Fi assets. We did a deal last year where we bought Broadcom's Wi-Fi technology that they were applying to markets other than mobile phones. That business has done super well. We have docking station solutions, and video processors applied to docking stations or video conferencing systems. That's done well for us.

In IoT, we have lots of different moving pieces, all of which are hitting at the moment, which is understandable. Work from home is good for our business. Wi-Fi in general — everything needs to be connected, and that’s driven our business. It’s been a lot of different moving parts, all of them moving simultaneously in a positive direction right now.

VentureBeat: How much emphasis do you see on IoT versus the traditional smartphone space or tablets?

Above: Synaptics is bringing AI to IoT devices.

Image Credit: Synaptics

Hurlston: Smartphones are an area where we've done well historically as a company. Our business there was display drivers, and then the touch circuit that drives the panel. We'll continue to play there. We're going to approach that business opportunistically, when we see a good opportunity to apply our technology to mobile.

But touch and display drivers — you touched on this with one of your first questions. That’s becoming more IoT-ish. Our technology that had done well in mobile, we’ll obviously continue to play in mobile where we can, but that market is competitive. A lot of players in it. Margins are tight. But what’s interesting is the market is much more open in AR/VR glasses, in games, in automobiles. We can take that same touch and display driver technology, reapply it to different end markets, and then you have something that looks more IoT-ish and commands better prices, better gross margins, things like that.

VentureBeat: As far as the role of a fabless semiconductor chip designer versus making larger systems or sub-systems, has anything changed on that front for you?

Hurlston: We're almost entirely chips, and I think that gets us further upstream in the technology, since our chips go into sub-systems that ultimately go into end products. Given the lead times, we see these technical trends before others do, like this concept of the visual wake word. That's something we're getting out in front of.

We do sub-systems here and there. We’re unique in that context. Our historic business is the touch controllers for PCs and fingerprint sensors. Some of the PCs have fingerprint sensing for biometrics. In some cases, we’ll make that whole sub-assembly — not just the IC that does the discrimination of where your finger is, but the entire pad itself and the paint and so on. Same with the fingerprint sensor. But that’s an increasingly small part of our business. Even our historic PC business, we’re getting more into chip sales than we are into sub-assembly sales.

VentureBeat: How many people are at the company now?

Hurlston: We have about 1,400 people, most of whom are engineers, as you’d expect.

VentureBeat: On the gaming side, do you see much changing as far as the kind of detection or sensing that’s going on?

Hurlston: AR/VR is going to be a much bigger thing. The displays seem to be changing a lot as well, particularly in handheld games. You have some of the pioneers moving to OLED. OLED has characteristics around latency and other things that are not particularly ideal. You can see the move coming — a lot of the gaming guys are talking about mini-LED or micro-OLED, which have much faster response than traditional OLED. We see display changes on the horizon. We're trying to gear our technology up for when those arrive.

VentureBeat: What sort of applications are you looking forward to that don’t exist today?

Hurlston: We talked about embedded touch. We talked about the push for augmented reality, although of course that’s already here. We talked about these low-power visual sensors. That’s an area in which we’re pushing. We continue to evolve our video display technology into higher resolution, both panels and displays. Obviously being able to take lower bitstreams and upconvert those — that’s where we apply a lot of our AI in the video sector, upconversion from a lower pixel count to a higher pixel count. Those are the big vectors.

With these low-power sensors, again, the big application in my view is solving energy. It's not necessarily a consumer problem. It's not just the energy required to go back and forth to the datacenter; it's gaining much more control over lights, power, and air conditioning, turning them on and off. At the micro level, the technology is environmental, and that's obvious when you have AI at the edge. But we're then applying it to a more macro problem, which is the useless energy consumption that happens all the time. We're trying to drive that message and apply the technology to that problem to the extent that we can.

Above: Synaptics still makes voice biometric chips.

Image Credit: Synaptics

VentureBeat: It feels like without some of these things, IoT was either incomplete or impractical. If you didn’t have energy efficiency or AI, you were brute-forcing these things into the world. You’d either need a lot more sensors or you were causing more pollution, whether on the network or in terms of the number of devices. When you add AI and energy efficiency, it feels more sensible to deploy all these things.

Hurlston: That’s absolutely true. Maybe taking it one step further back, having wireless connectivity has been a huge enabler for these kinds of gadgets. I never imagined that I’d have a doorbell that had electronic gadgets in it. I never imagined that you’d have a bike that has electronic gadgets in it. IoT started with low-power wireless connectivity that enabled things like scales or smoke detectors or bicycles to connect to other things. That was one.

Then, to your point, the next step in the evolution has been adding AI and other sensors to a connected device to make it more useful. I've been surprised by how many things we're getting into on the wireless side. It's crazy stuff that you wouldn't imagine. That was the first enabler, the low-power wireless, whether it's Bluetooth or wireless LAN or in some instances GPS. That capability is key. We have a Bluetooth and GPS chip inside a golf ball. It's pretty obvious what the use case is. But think about that. OK, I can find my ball when it's at the bottom of the lake. It started with the wireless connectivity.

VentureBeat: I wrote a story about one of the companies that are doing neural networks inside hearing aids. I never thought it would be useful in that context, but apparently they’re using it to suppress noise. It recognizes the sounds you don’t want to hear and suppresses them so you only hear people talking to you.

Hurlston: Right, you have to pick out the right frequencies. Going back to your point, the second leg of the stool is certainly AI now. Whether it’s voice or visuals as we’ve been discussing, you need AI as the second leg. You’d be surprised at where you can put these simple neural networks that make a difference.

VentureBeat: The new multimedia processor you just announced, can you talk about that?

Hurlston: That's really slotted for these set-top box applications. When we talked about the AI journey, that was the starting point: bigger video processors where we can do training on the chip around object detection. The big use case in this particular area is enhancing the video, being able to upscale a low-bitrate feed if your connection is relatively modest, like on these Roku streamers. You can get really low bandwidth if you're challenged as far as your internet connection. We can upscale the video using these processors, which is what the neural network is for.
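The interview doesn't name the network involved, but a classic learned upscaler such as ESPCN shows the shape of the technique: convolutional feature extraction, then a pixel shuffle that trades channels for spatial resolution. A minimal, untrained sketch:

```python
import tensorflow as tf

def espcn(scale: int = 4, channels: int = 3) -> tf.keras.Model:
    """ESPCN-style super-resolution: conv features, then depth_to_space
    rearranges scale**2 channel groups into a (scale x scale) larger image."""
    inp = tf.keras.Input(shape=(None, None, channels))
    x = tf.keras.layers.Conv2D(64, 5, padding="same", activation="relu")(inp)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2D(channels * scale**2, 3, padding="same")(x)
    out = tf.nn.depth_to_space(x, scale)  # the pixel shuffle
    return tf.keras.Model(inp, out)

# A low-resolution frame in, a 4x-upscaled frame out. Useful weights would
# come from training on low/high-resolution frame pairs, not shown here.
```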

The real catalyst for us is to get into a rather bland market, which is the service provider set-top box market, where we think we have some unique advantages. We can make a good business out of that. Another cool application we just announced is a voice biometrics partnership with a company that does voice prints. Instead of just recognizing a word, you recognize the speaker. That’s running on that same processor.
