IoT, Artificial Intelligence and How They Transform Interaction with the Physical World


By Dennis Clemente

NEW YORK—Sometimes a meetup isn’t just a meetup but an actual learning experience. Vaughn Shinall, head of product outreach at Temboo, did more than the usual company profile in his talk, providing the audience with valuable tips for bringing IoT (Internet of Things) to any business at the Hardwired meetup last November 16 at WeWork in Chelsea. It was a lesson in how IoT and artificial intelligence (AI) can help people interact with the physical world.

Shinall’s Temboo, which offers a software stack for IoT applications, gave the following tips:

  1. Start with a small but real, concrete problem
  2. Focus on saving time or money to create real value at the start
  3. Quick wins will help build confidence and expertise for IoT
  4. Get internal backing based on having a working system
  5. See how the data and functionality you’ve created can have additional uses
  6. See how existing applications can be modified for other uses
  7. Build new IoT capabilities on top of existing ones

Providing these tips is essential: over half of business processes are projected to incorporate IoT by 2020, and an estimated 22 billion IoT devices were expected to be connected to the internet by 2018.

Shinall showed a factory that had retrofitted its existing operations with IoT capabilities to reduce waste, adding automated alerts and sensors to its processes.

It was the modular music studio BLOCKS, however, that was the highlight of the evening for people hearing about it for the first time. ROLI, the music tech startup behind it, has raised $43 million from FirstMark Capital. It will reportedly be in all Apple stores globally this holiday season.

The other presenters were Charlie Key, founder and CEO of Losant (an IoT solution platform); David Lyman, founder and CEO of BetterView (a drone marketplace for aerial photography jobs); and Leif Jentoft, co-founder of RightHand Robotics (intelligent machines for e-commerce order fulfillment).

Key of Losant talked about real-time GPS asset tracking, which is expected to grow as sensors, GPS units and cellular modems have become readily available. About 38 billion devices are equipped with tracking capabilities. As such, many now see the value of tracking the location and health of nearly everything, including shipments.

The choice of devices will depend on cost, physical size, environmental conditions, geographical location and more. Losant provides systems integrators and product manufacturers with the flexibility to choose and connect to any hardware using any communication method on any network. Its application services and additional platform capabilities cover remote asset management, GPS tracking and mapping, reporting and M2M data integration. Understanding GPS data natively, to visualize locations and geofence the information, is crucial.
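To make the geofencing idea concrete, here is a minimal sketch, in plain Python, of testing whether a GPS ping falls inside a circular fence. The coordinates and radius are made up, and this is an illustration of the concept rather than Losant’s API:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in meters."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(point, center, radius_m):
    """True if a (lat, lon) point falls within radius_m of the fence center."""
    return haversine_m(point[0], point[1], center[0], center[1]) <= radius_m

# A tow truck pinging from lower Manhattan, fenced around a hypothetical depot.
depot = (40.7128, -74.0060)
ping = (40.7130, -74.0055)
print(inside_geofence(ping, depot, 500))  # True
```

A real tracking platform layers alerting, history and mapping on top of this basic point-in-fence test.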

How does it make money? “People pay us based on data points,” Key said, explaining that the company “works with companies with physical assets like tow trucks.”

Lyman said BetterView, a platform for capturing and analyzing drone data, has software that makes it easy to capture that data. It reportedly combines drone-gathered, expert-analyzed imagery with public data like assessor’s permits, fire station proximity and historical weather to pinpoint risks, estimate costs, and drive action around buildings and properties.

Founded two years ago, BetterView combines public data, drone imagery, computer vision and human experts to analyze data for its 70 customers. It claims a 3,500-pilot network and says it has analyzed 4,200 rooftops, or the equivalent of 130 million square feet.

Lyman said if you’re too early (in the drone space), you can get burned. If the industry holds its promise, he estimates it will rake in 1.8 million sales by 2020. “We see adoption in commercial business.”

Already, drones and AI are improving insight and transforming how we interact with the physical world.

Another presenter, RightHand Robotics, provides end-to-end solutions that reduce the cost of e-commerce order fulfillment for electronics, apparel, grocery, pharmaceuticals and countless other industries.

Facebook Shows Roadmap to AI; Qubole Addresses Big Data’s Low Success Rate


NEW YORK—Why do companies struggle with Big Data, and why is Ashish Thusoo, founder and CEO of cloud-scale data processing company Qubole, concerned about it? The answer is obvious: Big Data gives companies a competitive advantage if they can manage it; unfortunately, they can’t always. It has been reported that only 27 percent of Big Data initiatives were classified as successful in 2014.

What are the impediments for aspiring data-driven enterprises? Thusoo enumerated them as follows: rigid, inflexible infrastructure; non-adaptive software services; highly specialized systems; and systems that are difficult to build and operate.

Thusoo was joined by Antoine Bordes, AI research scientist at Facebook; Katrin Ribant, founder and CSO at Datorama; and Nick Elprin, founder and CEO at Domino Data Lab, last October 26 at the Data Driven meetup at Bloomberg. The meetup, hosted by FirstMark Capital’s Matt Turck, was back at Bloomberg after holding several events at the AXA Equitable Center.

Why is data important? “You get left behind (if you don’t tackle it),” Thusoo said. “Data has been the driver. What can data do if you open it up, if you make it available on the cloud? A cloud-based SaaS approach is turnkey. Get there quickly.”

“There has been a marked change in cloud. It’s more secure now. It keeps everybody honest,” he added.

Qubole simplifies, speeds and scales big data analytics workloads against data stored on AWS, Google, or Azure clouds.

It’s refreshing to hear a grounded perspective on the state of artificial intelligence.

Bordes, an AI research scientist at Facebook, showed slides demonstrating how computer vision is not foolproof yet, including photos where the captions were not translated properly. “We have a 20-year roadmap to AI,” he said.

A team of 80 researchers at the social network is making use of synthetic tasks, even Wikipedia, as it works on improving AI. “Our motivation for work is not project driven. Our mission is to advance what AI can do. Everything we do should be applied to any format/visual input.” These include bAbI tasks, end-to-end memory networks, neural reasoners and dynamic memory networks.

Facebook now has 1 billion stories posted every day, 100 million hours of video watched every day and 2 billion photos shared every day.

Domino Data Lab’s Elprin, for his part, talked about his belief in experimental agility and deployment agility. He thinks the best organizations work together as teams to advance common knowledge, even if “many data scientists think of their work as a solo act.”

“Great outcomes come from a culture of discipline, collaboration, constant improvement,” he said.

Ribant, whose Datorama offers Big Data management for advertisers and ad agencies, talked about how marketing with data puts you ahead of the digital revolution.

“We’re an end-to-end marketing analytics platform. Next year, we will apply machine learning to create insights from existing data assets.”


‘Winners of Tomorrow Will Have Artificial Intelligence,’ says VC


By Dennis Clemente

NEW YORK—The Data Driven meetup has always been an effective mix of show-and-tell demos and fireside chats with its guests. Last September 27, New York’s most well-attended meetup held its most inspired event this year with an impressive lineup of guests, packing every inch of the cavernous 480-seat AXA Equitable Center. A four-member panel of VCs from Silicon Valley talked candidly about building businesses around artificial intelligence, while other speakers talked about the new things they are doing at their companies.

Steady host Matt Turck of FirstMark Capital interviewed the VCs: Jeff Chung, managing director at AME Cloud Ventures; Mike Dauber, general partner at Amplify Partners; Jake Flomenberg, partner at Accel; and Aditya Singh, partner at Foundation Capital.

“Winners of tomorrow will be because AI was behind their product,” Singh said.

The adoption stage is early, but Singh believes “customers want solutions, not individual pieces,” emphasizing how his firm “helps you get customers and establish product-market fit.”

Flomenberg, for his part, thinks “we have a loose definition of AI.” He sees potential in computer vision.

Dauber thinks AI is real, if only the hype finds the right mix, but then there’s Google. “Google is who I am worried about. I think they can beat us senseless.” On top of that, he thinks “access to (second round) capital is not easy.”

Chung looks forward to having medical records scanned in a way that leverages big data.

The healthcare industry is an endless curiosity for VCs, but Dauber probably put it best: “Healthcare is the most exciting and terrifying vertical,” adding that it faces many regulations.

Even if money flows to startups in the artificial intelligence space, Dauber thinks “technical people are hard to find” to expedite any development.

Chung agrees: “It’s a challenge if you don’t have a strong foundational team. It is a challenge whether you are here or in San Francisco. Many are coming from academia.”

The summer hiatus certainly did the Data-Driven Meetup some good as it offered more interesting presentations.

Other guests were Noah Weiss, head of Search, Learning & Intelligence at Slack; Praveen Murugesan, engineering manager at Uber; and Jeremy Stanley, VP of Data Science at Instacart (the one-hour grocery delivery platform). Weiss talked about Slack’s beginnings, how it unfurled from IRC chat, text messaging and Facebook. And lest everyone has forgotten, it made its start as a game.

“Macro trends plus the shift to mobile (formed) into a perfect storm,” Weiss said.

Now Slack is looking into addressing the increasing volume of communication by helping people focus on the conversations they really need to read. Categorizing messages by priority, as well as having a fully “indexable” search, should help someone catch up with a team after missing a day or two.
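A fully “indexable” search typically rests on an inverted index, which maps each word to the messages containing it. A toy sketch of the idea (an illustration, not Slack’s implementation; the messages are made up):

```python
from collections import defaultdict

def build_index(messages):
    """Map each lowercase word to the set of message ids containing it."""
    index = defaultdict(set)
    for msg_id, text in messages.items():
        for word in text.lower().split():
            index[word].add(msg_id)
    return index

def search(index, query):
    """Return ids of messages containing every word of the query."""
    sets = [index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*sets) if sets else set()

messages = {
    1: "Deploy rollout blocked on review",
    2: "Lunch options near the office",
    3: "Review the rollout plan before standup",
}
print(search(build_index(messages), "rollout review"))  # {1, 3}
```

A production search engine adds ranking, stemming and per-team access control, but the core lookup is this word-to-documents map.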

Carlos Guestrin, Amazon Professor of Machine Learning at the University of Washington and founder and CEO of Turi (a machine learning startup recently bought by Apple), also gave a great presentation, along with Kostas Tzoumas, founder and CEO of Data Artisans (the company behind Apache Flink, a stream data processing framework).

Stanley talked about how Instacart’s 100-person staff works to make sure it delivers within 60 minutes as it tries to capture its 600-million market through product and retail partnerships. “Delivering orders really matters…. (It’s) critical for customer happiness,” he said, adding that it has achieved profitable unit economics driven in part by huge decreases in fulfillment time.

How does Uber operate in 75 countries and 500 cities? Murugesan credits its thousands of city operators, the on-the-ground teams who run and scale its transportation network, along with hundreds of data scientists and analysts as well as its engineering teams.

“We do A/B experimentations, spend analysis, build automated data applications,” he said, adding that Uber has a scalable ingestion model: a homegrown streaming ingestion solution and a Hadoop data lake (no more limits on storage).

“Machine learning is hot, but can you trust it? How do we know it’s working?” Guestrin asked. His answer: “You deploy a model and do A/B testing.”

He used Netflix as an example and how we trust its AI system.
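Deploying a model and A/B testing it usually means comparing success rates between a control arm (the current model) and a treatment arm (the candidate). A minimal sketch with made-up numbers, using a two-proportion z-score:

```python
import math

def ab_z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-score: is model B's success rate really higher than A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)            # success rate if arms were identical
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 10,000 users per arm: 11% success on the current model, 12% on the candidate.
z = ab_z_score(conv_a=1100, n_a=10000, conv_b=1200, n_b=10000)
print(round(z, 2), "significant at 95%:", z > 1.645)
```

A z-score above roughly 1.645 means the improvement is unlikely to be noise at the 95 percent one-sided level, so the candidate model can be rolled out more widely.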

How would you like images automatically tagged? Clarifai does it


NEW YORK—Last July 18, HUI Central featured Clarifai, the three-year-old artificial intelligence company that focuses on visual recognition and solving real-world problems for businesses and developers, at its Midtown East office.

What problems? Imagine having to tag hundreds of images on your site, one by one. That would be too much of a chore. Clarifai does the tagging for you when you upload them—automatically.

Presenter Cassidy Williams showed Clarifai’s powerful image and video recognition technology, built on machine learning systems and made available to developers via a clean API. Williams showed how the technology works using “convolutional neural networks,” which reportedly improve their image recognition capability with consistent use.

Williams compared convolutional networks with fully connected ones: the former are fast to train and can find multiple items, whereas the latter offer no recognition of spatial structure but are good for finding a single item. Both, she said, create a multilayer neural network.

What are convolutional neural networks? One tutorial defines them “as biologically-inspired variants of MLPs. From Hubel and Wiesel’s early work on the cat’s visual cortex, we know the visual cortex contains a complex arrangement of cells. These cells are sensitive to small sub-regions of the visual field, called a receptive field. The sub-regions are tiled to cover the entire visual field. These cells act as local filters over the input space and are well-suited to exploit the strong spatially local correlation present in natural images.

“Additionally, two basic cell types have been identified: Simple cells respond maximally to specific edge-like patterns within their receptive field. Complex cells have larger receptive fields and are locally invariant to the exact position of the pattern.

The animal visual cortex being the most powerful visual processing system in existence, it seems natural to emulate its behavior. Hence, many neurally-inspired models can be found in the literature.”
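The “local filters over the input space” in that definition can be illustrated with a bare-bones 2D convolution in plain Python. This is a teaching sketch of the operation itself, not Clarifai’s system; the tiny image and vertical-edge kernel are assumptions chosen for the demo:

```python
def conv2d(image, kernel):
    """Slide a small kernel over the image: each output cell sees only a
    local receptive field, mirroring the 'small sub-regions' described above."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = sum(image[i + di][j + dj] * kernel[di][dj]
                      for di in range(kh) for dj in range(kw))
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge kernel responds strongly where dark meets light.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(image, kernel))  # [[0, 2, 0], [0, 2, 0]]
```

Stacking many such filters, with weights learned from data instead of hand-picked, is what makes a network “convolutional.”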

Today, big companies are confident that deep learning can handle large data sets, and they have greater computing power. It’s a game changer for AI prototyping. Not only that, it can serve as a boon for advertisers trying to pinpoint better use, and even the best timing, for any photo or video.

Clarifai has a REST API that can be integrated with your preferred language, along with Python, Java and Node.js client libraries. For more information, visit Clarifai’s website.
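As a rough sketch of what calling an image-tagging REST API involves, the snippet below only assembles a hypothetical JSON request body rather than sending it. The field names here are illustrative assumptions, not Clarifai’s actual schema, so consult the official API documentation before integrating:

```python
import base64
import json

def build_tag_request(image_bytes, model="general"):
    """Assemble the JSON body an image-tagging REST call might carry.
    Images are base64-encoded so binary data can travel inside JSON."""
    return json.dumps({
        "model": model,  # hypothetical field naming the recognition model
        "inputs": [{
            "data": {"image": {"base64": base64.b64encode(image_bytes).decode("ascii")}}
        }],
    })

body = build_tag_request(b"\x89PNG...fake image bytes")
print(json.loads(body)["model"])  # general
```

The actual call would POST this body with an API key header; the response would carry predicted tags with confidence scores.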

Apps are becoming more humanized

NEW YORK—New year, new name. Last February 22, the NUI (Natural User Interface) Meetup, in its fourth year, became the HUI (Humanized User Interface) Meetup. It was said to be a more accurate description because “the advancements today enable apps and devices to interact with us like we do with other people,” organizers Ken Lonyai and Debra Benkler said.

As hosts, the organizers talked about where HUI is headed, how to best use it in projects and products, and how to develop HUI-based user experiences using the plethora of APIs available right now.

Differentiating NUI from HUI, Benkler said NUI, as coined by Steve Mann, refers to actions that come naturally to human users: the use of nature as an interface itself. For many, the definition has supposedly come to mean any interface that is natural to the user.

Natural is not without its issues, while HUI is said to unify human-like experience, reducing barriers to human-machine interaction, extending the benefits of technology and engaging greater segments of the population.

“HUI is multi-sensory and bi-directional. It mimics real world interactions. It’s immersive. It can make devices effectively invisible,” the hosts said.

The hosts discussed HUI technologies including touch, gesture, voice, eye tracking and object/facial recognition, among others.

On touch, supposedly the most underdeveloped HUI technology, Lonyai talked about trends in haptics. “Future haptics will simulate temperature and viscosity,” he said, pointing to telepresence, in-air haptics, conductive fabric and real-world objects as the directions in which screen touch will come to count as an HUI.

On gesture, Lonyai said there will be more uses of body movements to interact with a system. Typically, it requires specialized equipment: a 3D depth-sensing camera (Kinect), an electromyography device (Myo) or ultrasound transducers. 3D depth cameras are largely peripherals, but that is said to change in 2016.

How do 3D depth-sensing cameras work? They project a field of infrared points, which are read by the cameras to determine depth.

The point data is processed and a primitive image is created. It can also be used for skeletal tracking using algorithms. Changes in position can be measured and correlated to mean or trigger almost anything. Depth-sensing cameras can also track heart rates.
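That projection-and-read process boils down to triangulation: a projected infrared dot that shifts more between the emitter’s and the camera’s viewpoints is closer to the sensor. A minimal sketch of the depth formula, with illustrative numbers rather than any device’s real calibration:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate depth: focal length (pixels) times the emitter-camera
    baseline (meters), divided by how far the dot shifted (pixels)."""
    if disparity_px <= 0:
        raise ValueError("dot not matched between views")
    return focal_px * baseline_m / disparity_px

# Kinect-like numbers, chosen for illustration only.
print(depth_from_disparity(focal_px=580, baseline_m=0.075, disparity_px=29))  # 1.5
```

Running this formula over the whole dot field produces the “primitive image” of depths that skeletal-tracking algorithms then consume.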

For Lonyai, UX best practices for gesture include knowing system limitations, designing large interaction areas, minimizing “gorilla arms,” avoiding custom gestures, avoiding occlusions, using contextual affordances and considering cultural issues.

By gorilla arms, he was referring to how you can’t keep your arms raised for long periods of time, pointing out how even Tom Cruise in “Minority Report” got tired of holding his arms up during filming.

What about object/facial recognition? Since humans can distinguish over 30,000 visual objects in a few hundred milliseconds, object/facial recognition is definitely interesting to explore. He cited how 2D and 3D APIs can determine facial “landmarks.” This means the minute details of your face can be captured: the distance between the eyes, the width of the nose, the depth of the eye sockets, the shape of your cheekbones.
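Those measurements can be sketched as simple distance ratios between landmark coordinates. The landmark names and points below are made up for illustration, and production systems use far richer learned representations, but the idea of reducing a face to scale-invariant geometry looks roughly like this:

```python
import math

def landmark_signature(landmarks):
    """Reduce a face to ratios of pairwise landmark distances so the
    signature is invariant to image scale."""
    def dist(a, b):
        return math.dist(landmarks[a], landmarks[b])
    eye_gap = dist("left_eye", "right_eye")  # normalizing baseline
    return (dist("nose", "chin") / eye_gap,
            dist("left_cheek", "right_cheek") / eye_gap)

# Hypothetical pixel coordinates from a landmark detector.
face = {"left_eye": (30, 40), "right_eye": (70, 40),
        "nose": (50, 60), "chin": (50, 100),
        "left_cheek": (20, 70), "right_cheek": (80, 70)}
print(landmark_signature(face))  # (1.0, 1.5)
```

Comparing two such signatures is the difference Lonyai raised: verifying one claimed identity (authentication) versus searching all stored signatures for a match (identification).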

“It’s going to be all about ‘authentication vs identification’,” he said.

‘AI is stuck because it fell in love with stats and big data’

NEW YORK—“Why is AI (artificial intelligence) stuck?” asked Gary Marcus of Geometric Intelligence. “Because it has fallen in love with statistics and big data.” He was showing how, in so many ways, AI is not where we thought it would be by now. For example, one would expect online translation to be more precise by now, but it isn’t, really. Quoting Peter Thiel, he also said, “We wanted flying cars; instead we got 140 characters,” in reference to Twitter, of course.

A scientist, bestselling author and entrepreneur, Marcus was at the Data-Driven meetup last January 19 at Bloomberg, where he had the crowd of data scientists, developers and business intelligence analysts chuckling along with his funny yet whip-smart and practical insights. He is also a professor of psychology and neural science at NYU.

The other presenters were Amir Orad, CEO of Sisense, which handles business intelligence for complex data; Shivon Zilis, investor at Bloomberg Beta, an early-stage VC firm; and Dan Scholnick, general partner at Trinity Ventures, a VC firm based in Silicon Valley.

Orad likes to say that Sisense, started eight years ago, came about because of five data geeks who met in university and wanted to make business intelligence understandable, cost-efficient and accurate, adding that “the more complex your data, the more you spend.”

Sisense brings disruptive simplicity to big data and multi-source data. Orad ran through a list of things the company is looking into: a DBA to build the database; defining what data will be queried; joining tables upfront; and normalizing and creating a star schema.
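The last two items, joining tables upfront and building a star schema, can be sketched with Python’s built-in SQLite. This toy schema (one fact table ringed by dimension tables) is an illustration of the pattern, not Sisense’s engine:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- dimension tables describe the "who/what/where"
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_region  (region_id  INTEGER PRIMARY KEY, name TEXT);
    -- the fact table holds the measurable events, keyed to the dimensions
    CREATE TABLE fact_sales  (product_id INTEGER, region_id INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO dim_region  VALUES (1, 'NY'), (2, 'SF');
    INSERT INTO fact_sales  VALUES (1, 1, 100.0), (1, 2, 50.0), (2, 1, 75.0);
""")

# Joining the fact table to its dimensions upfront answers analytics queries.
rows = con.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    GROUP BY p.name ORDER BY p.name
""").fetchall()
print(rows)  # [('gadget', 75.0), ('widget', 150.0)]
```

The appeal of the star shape is that every analytic question becomes a join from one central fact table out to small, well-understood dimensions.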

What lessons have they learned at Sisense? “Dream big. Refine benefits. Don’t automate, obliterate. Disrupt, don’t improve. Be totally different; that’s the only way to offer value,” he said.

“Speed is not the end game but beginning of something else,” he added.

Shivon Zilis of Bloomberg Beta gave updates on the companies the venture capital fund is investing in: hundreds of them, so many that she had no time to explain each but showed, slide after slide, the logos of many recognizable names. She termed it an “explosion of activity” with “startups focusing on niches that provide immediate value.”

In all these investments, Zilis listed the following what-if scenarios that we certainly hope can be solved: What if I had the same support as a Fortune 400 CEO? What if I never had to feel lonely again? What if I never had to go to a primary care physician? What if I could measure the effectiveness of every word I said? What if I never had to drive again?


Some realistic expectations include how, in five years, it will be crazy for a farmer to overwater their fields, or how it will be crazy to ever hit “publish” without using a domain-specific text optimizer, one that makes you smarter even when you’re not using it.

It was also good to hear Scholnick of Trinity Ventures say that his VC firm doesn’t outsource work to junior staff, which matters for startups looking to reach the decision makers right away.

As for hiring, he advised startups to make sure they’re hiring people with the right experience.

Meet Amy, Larry, Alfred and Stefanshead, some app and text assistants

NEW YORK—Some apps certainly function as if they were invisible, like Dennis Mortensen’s, an artificial intelligence-powered personal assistant that schedules meetings for you.

Mortensen was again making the rounds with Amy, the name of his A.I. personal assistant, who happened to be in the same room as Larry, Raad Ahmed’s text-responding lawyer, a mix of automation and human beings. Larry is the text version of Ahmed’s LawTrades, personalized legal help tailored to your business over text.

Both presenters, along with the startups Alfred and Stefanshead, were at the Product Hunt meetup last July 22 at Animoto’s offices.

Launched as a side project two months ago (and built in only seven days), Larry has already sent and received over 10,000 text messages pertaining to all matters legal, Ahmed said. Perhaps not many know this, but Ahmed said LawTrades maintains a community of vetted attorneys who “work at lower rates.”

So Larry is an even simpler way for people to ask legal questions. You text, get help and then pay. “We’re never spammy,” he said, answering a question about concerns over giving away one’s phone number.

What has he learned by adding Larry to LawTrades?

“The community is the key to loyal user base and word-of-mouth virality. Learn early and learn often. Don’t worry about launching too early or too often. (And you need) less technology,” he said.

For errands, Alfred is supposed to take care of things for you. Alfred is an automatic, hands-off service. Unlike others that narrow their service down to one chore, Alfred does everything, as the name suggests. If you need laundry done or something delivered, Alfred takes care of it for you.

A TechCrunch Disrupt winner last year, Alfred has come a long way from its first iteration on Google Sheets as it prepares to launch its app in a few months. To make Alfred work in the beginning, the team learned by “going to the streets and knowing their markets.”

Alfred offers errands like grocery shopping, laundry, dry cleaning, house cleaning, tailoring and pharmacy special requests. The most common special request so far is handiwork, like TV installations. It claims to have vetted partners.

What kind of work is more common in some cities? In New York, it’s shoe repair and dry cleaning. In Boston, it’s grocery shopping. Alfred charges $15 a week.

Trey Sisson of Stefan’s Head spoke about the startup, the first-ever text message-driven retail brand. Stefan texts you once in a while, offering products you’ve never seen before and can’t get anywhere else.

“We offer a text message list for limited-run apparel,” he said. You can also text Stefanshead to get deals.

Sisson said you just have to listen to the haters, because they can help you. And in building its site, he said, the moment “pivot enters your mind, pivot.”

’s Amy is focused on perfecting scheduling technology

NEW YORK—In a multitasking world, founder Dennis Mortensen likes Amy to do one thing but do it better than anyone else. That means better than humans, because she is an “it,” or artificial intelligence, to be exact. Amy of is a personal assistant who schedules meetings for you.

Mortensen was at the NUI meetup last April 20 to discuss where Amy is headed. Nope, she can’t help you google a trip yet, but she at least knows how to schedule for you, which Google Voice and Siri can’t do. This is perhaps why some attendees are clearly big fans of Amy: it can book an appointment for you by simply arranging meetings via email, where other A.I.s can only do search.

We wonder if Amy can eventually recover lost notes, as this story had to rely on memory after the notes disappeared, but Mortensen plays it conservatively when questions persist about what else Amy can do. His recently secured funding, $9.2 million in a Series A financing led by FirstMark Capital, may just vanish too if he tries to do too much. His budget can only do so much.

Right now, all Mortensen will say as he further develops Amy is that he needs more data. That’s where the seed money is going, so he can realize a fully emulated human scheduling negotiator in Amy.

A question was asked whether Amy will have a dashboard or some visual interface, and Mortensen doesn’t mince words: no. Right now, the process is simple. You cc Amy on first contact with an email respondent, then she does the rest of the follow-ups for you; you just sit back and let Amy do the worrying. By the way, the system requires no sign-in, no password and no download; you just cc Amy when you’re planning a meeting with anyone.

Amy sends a meeting invite to you and your guest for the agreed-upon time. The date, time and location will align with your personal preferences, your current free/busy time and the constraints set by your guest. You obviously don’t see any of this; you just receive the invite and can look forward to a chat with your vendor, without having to deal with two days of stress scheduling it.
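Under the hood, that free/busy alignment is essentially an interval-intersection problem: invert each calendar’s busy blocks into free windows, then find the earliest window both sides share. A minimal sketch assuming hour-granularity calendars (an illustration of the idea, not the product’s actual algorithm):

```python
def free_slots(busy, day_start, day_end):
    """Invert a list of busy (start, end) hours into free windows."""
    slots, cursor = [], day_start
    for start, end in sorted(busy):
        if start > cursor:
            slots.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < day_end:
        slots.append((cursor, day_end))
    return slots

def first_common_slot(busy_a, busy_b, length, day_start=9, day_end=17):
    """Earliest window of `length` hours free on both calendars, else None."""
    for a_start, a_end in free_slots(busy_a, day_start, day_end):
        for b_start, b_end in free_slots(busy_b, day_start, day_end):
            start, end = max(a_start, b_start), min(a_end, b_end)
            if end - start >= length:
                return (start, start + length)
    return None

host = [(9, 10), (12, 14)]     # busy 9-10 and 12-14
guest = [(10, 11), (15, 16)]   # busy 10-11 and 15-16
print(first_common_slot(host, guest, length=1))  # (11, 12)
```

The hard part Amy automates is not this intersection but the negotiation: eliciting the guest’s constraints over email until both busy lists are known.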

Amy recently got good press from CBS News, which said: “Amy feels like the future. Even more than Siri and Google Now, she seems seamlessly human and as effective as a real person doing the same task.”