The Talkbox project

On-device Voice-based AI

For your commercial and business applications.

...

On-device, works offline.

The Talkbox layer allows state-of-the-art neural networks (NLP, Vision, AGI) to run locally on your device (smartphone, tablet, watch). This means ultra-low latency and 100% uptime, even without a network connection.
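
To make the idea concrete, here is a minimal Kotlin sketch of on-device inference, assuming a TensorFlow Lite-style interpreter. The model file name, feature shape, and intent count are illustrative assumptions, not part of Talkbox itself.

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File

// Assumed number of intent classes, for illustration only.
const val NUM_INTENTS = 8

// Hypothetical sketch: classify a spoken command entirely on-device.
fun classifyUtterance(features: FloatArray): Int {
    val interpreter = Interpreter(File("talkbox_intent.tflite")) // local model, no network round-trip
    val input = arrayOf(features)                      // shape [1, featureSize]
    val output = Array(1) { FloatArray(NUM_INTENTS) }  // shape [1, NUM_INTENTS]
    interpreter.run(input, output)                     // inference runs locally: low latency, works offline
    interpreter.close()
    return output[0].indices.maxByOrNull { output[0][it] } ?: -1
}
```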

...

Low cloud costs.

GPU-based cloud inference can become very expensive very quickly. Because all of our AI runs locally on the device, ongoing cloud costs are negligible. Feedback and related data can optionally be aggregated to push a refined model downstream on each update cycle (for example, every few weeks or months), as sketched below.
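
As an illustration of that update cycle, the following Kotlin sketch checks for and downloads a refined model at most once per cycle. The endpoint URLs, manifest format, and file names are assumptions made for the example, not the Talkbox API.

```kotlin
import java.io.File
import java.net.URL

// Illustrative sketch: fetch a refined model on the periodic update cycle.
fun refreshModelIfAvailable(currentVersion: Int): Boolean {
    val latest = URL("https://example.com/talkbox/latest-version.txt")
        .readText().trim().toIntOrNull() ?: return false
    if (latest <= currentVersion) return false   // nothing new this cycle

    // One download every few weeks/months, instead of a cloud round-trip per request.
    val bytes = URL("https://example.com/talkbox/model-v$latest.tflite").readBytes()
    File("talkbox_intent.tflite").writeBytes(bytes)
    return true
}
```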

...

Privacy and security.

On-device means your data always stays on the device, offering users the maximum possible privacy. From a cybersecurity perspective, this also presents the smallest attack surface, making it more secure than running the neural networks in the cloud. Optionally, some aggregate data can be transferred to the cloud to fine-tune the neural networks or for analytics. This is fully configurable and entirely optional.
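
As a rough sketch of what "fully configurable and optional" could look like in practice, the example below models the sharing settings as a small configuration object. The field names and defaults are assumptions for the example only.

```kotlin
// Illustrative sketch of configurable, opt-in data sharing.
data class SharingConfig(
    val shareAggregateAnalytics: Boolean = false, // off by default: nothing leaves the device
    val shareFineTuningData: Boolean = false,     // opt-in aggregates for refining the model
    val uploadIntervalDays: Int = 30              // how often any opted-in aggregates are sent
)

fun shouldUpload(config: SharingConfig, daysSinceLastUpload: Int): Boolean {
    val anySharingEnabled = config.shareAggregateAnalytics || config.shareFineTuningData
    return anySharingEnabled && daysSinceLastUpload >= config.uploadIntervalDays
}
```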