Enabling faster, smaller, more energy-efficient and more reliable computer systems

acceleration is the derivative of velocity with respect to time

Why are you here?

  • Are you working on a compute-intensive algorithm (e.g. for artificial intelligence, computer vision, computational science, etc.) that should perform optimally while balancing multiple objectives (e.g. speed, power consumption, accuracy, reliability, cost)?
  • Do you need to optimize your solution to perform well across different inputs and hardware platforms (e.g. from IoT devices to data centers)?
  • Do you need to objectively compare several possible solutions in order to make informed decisions?
  • Or are you working on novel computing systems that should perform well on realistic emerging workloads?

How can we help?

Designing and optimizing computing solutions has become extremely challenging due to an exploding number of available choices and their interactions. Unfortunately, limited understanding of trade-offs, combined with cost and time-to-market pressures, means that only a few design and optimization choices are ever explored. This often results in over-provisioned (expensive) and under-performing (uncompetitive) products.

Over the past 10 years we have developed Collective Knowledge, a unique technology and scientific methodology that combines universal autotuning, optimization knowledge sharing and predictive analytics to dramatically accelerate software and hardware co-design.
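
To make the underlying idea concrete, here is a minimal, illustrative sketch of empirical autotuning (measuring configurations and keeping the best one). It is plain Python only, not the Collective Knowledge API; the benchmark source bench.c, the compiler invocation and the flag list are hypothetical placeholders.

```python
# Illustrative sketch only (not the Collective Knowledge API): empirically
# autotune compiler flags for a hypothetical benchmark by randomly sampling
# the design space and keeping the fastest measured configuration.
import random
import subprocess
import time

# Hypothetical search space of optimization flags.
FLAGS = ["-O3", "-funroll-loops", "-ffast-math", "-fomit-frame-pointer"]

def measure(flags):
    """Build and time one configuration (bench.c is a placeholder benchmark)."""
    subprocess.run(["gcc", *flags, "bench.c", "-o", "bench"], check=True)
    start = time.perf_counter()
    subprocess.run(["./bench"], check=True)
    return time.perf_counter() - start

random.seed(0)
# Random subsets of flags instead of an exhaustive (exponential) search.
samples = {tuple(sorted(random.sample(FLAGS, random.randint(1, len(FLAGS)))))
           for _ in range(10)}
results = {flags: measure(list(flags)) for flags in samples}
best = min(results, key=results.get)
print("fastest configuration:", " ".join(best), "->", round(results[best], 3), "s")
```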

Our approach has been successfully validated in multiple academic and industrial projects, and received several international awards. We have helped our partners (including some Fortune 100 companies) achieve 2-20x performance increases, 30% energy reductions, 20% code size reductions, and automatic detection of software and hardware bugs for their business-critical use cases.

Our services

We use our exceptional scientific and engineering background, strong commitment and motivation, and unique Collective Knowledge approach to deliver results of the highest value, on time.

We can help you:

  • automate performance analysis and benchmarking;
  • expose all design and optimization knobs from your software and hardware to make it automatically tunable and adaptive;
  • simplify optimization knowledge sharing across communities of hardware vendors and software developers;
  • perform multi-objective compile-time and run-time optimization (e.g. cost vs. performance vs. energy vs. accuracy; see the sketch after this list) using practically any programming technology, including OpenCL, OpenMP, CUDA, MPI and so on;
  • apply statistical and machine-learning techniques (e.g. building performance models, identifying performance bottlenecks);
  • develop programming tools (e.g. compilers, profilers, highly optimized libraries);
  • tune optimization heuristics on representative workloads;
  • automatically stress-test compilers.
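
As a small illustration of the multi-objective optimization mentioned above, the sketch below keeps only the Pareto-optimal configurations when trading execution time against energy. The configuration names and numbers are purely hypothetical and are not measurements from our projects.

```python
# Illustrative sketch of multi-objective selection (hypothetical numbers, not
# real measurements): keep only configurations that are not dominated in both
# execution time and energy (lower is better for both objectives).

def pareto_front(points):
    """Return the (config, time, energy) tuples not dominated by any other point."""
    def dominates(q, p):
        return q[1] <= p[1] and q[2] <= p[2] and (q[1] < p[1] or q[2] < p[2])
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (configuration, time in ms, energy in mJ) -- illustrative values only
measurements = [
    ("-O0",                 300, 200),
    ("-O2",                 120,  95),
    ("-O3",                 100, 110),
    ("-Os",                 140,  80),
    ("-O3 -funroll-loops",   98, 115),
]

for cfg, t, e in pareto_front(measurements):
    print(f"{cfg}: {t} ms, {e} mJ")
```

Instead of picking a single "best" point, presenting the whole Pareto front lets you choose the trade-off that matches your constraints (e.g. a battery-powered device vs. a latency-critical server).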

You can view a live demo of Collective Knowledge and public examples of our work at cknowledge.org.

Contact us!

Who are we?

Anton Lokhmotov, CEO

PhD (University of Cambridge)

Anton has been working in the area of programming languages and tools for 15 years, both as a researcher and engineer, primarily focusing on productivity, efficiency and portability of programming techniques for heterogeneous systems. Anton founded dividiti to pursue his vision of efficient and reliable computing everywhere.

From 2010 to 2015, Anton led the development of GPU Compute programming technologies for ARM Mali GPUs, including production (OpenCL, RenderScript) and research (EU-funded CARP project) compilers. He was actively involved in championing technology transfer, educating professional and academic developers, engaging with partners and customers, and contributing to open-source projects and standardization efforts. In 2008-2009, he was a postdoctoral research associate at Imperial College London.

Grigori Fursin, CTO

PhD (University of Edinburgh)

Grigori has an interdisciplinary background in computer engineering, physics and predictive analytics, with more than 20 years of R&D experience. He has pioneered systematic performance analysis, optimization and adaptation of computing systems based on statistical analysis, machine learning, and automatic and crowdsourced tuning. Grigori co-founded dividiti to transfer to industry his unique open-source Collective Knowledge technology, which consists of a customizable knowledge repository, a plugin-based autotuning framework and web services for predictive analytics (statistical analysis, data mining, machine learning and feature selection).

Since joining INRIA as a tenured research scientist in 2007, Grigori has led several highly successful R&D projects, including the EU-funded MILEPOST project that produced the world's first machine-learning-based production compiler (MILEPOST GCC). In 2010-2011, Grigori was on leave from INRIA to establish the Intel Exascale Lab in France, serving as the head of its program optimization and characterization group.

As founder of the nonprofit cTuning Foundation, Grigori is also leading a new artifact evaluation initiative for PPoPP and CGO, the premier ACM conferences on parallel programming and code generation, which aims to encourage sharing of code and data to enable reproducible systems research and engineering.