APPENTRA is a deep tech software startup creating DevOps tools for the key stages of software development in parallel computing.
Jul 23, 2020 · 7 min read
It was 2003, and in computer engineering, making programs run faster was still mostly a matter of using more performant generic processors (Central Processing Units, or CPUs). In simple terms, computer programs are made of instructions which, up until then, were mainly executed sequentially over processor cycles, so the higher the processor clock frequency (more cycles per unit of time), the faster the program would run.
But a new wave was about to break: as the processor clock frequency in a microchip increased further, dealing with variables such as power consumption and heat generation became unmanageable in the design of those ultra-performant chips. Facing those challenges, some chip manufacturers, like Intel, decided to rethink the hardware approach, moving from a single processing unit (single core) to multiple processing units (multiple cores) on the same chip.
The following chart, from the computational scientist Karl Rupp of TU Wien, illustrates that shift by merging several microprocessor trend parameters spanning more than 40 years.
As you can see, clock frequency (in green) has reached a plateau, while the number of logical cores (in black) has kept increasing. So, the new approach was no longer to increase the “power” of a single processor to perform more cycles in a given timeframe (thus executing more sequential instructions), but rather to split instructions (workloads) across different processing units working in parallel.
The industry fully adopted this move, with chip manufacturers progressively deploying hardware architectures with an increasing number of processing units (cores) of different types (from generic CPUs to specialized units like Graphical Processing Units, or GPUs). Quoting Karl Rupp: “… if you want to benefit from future processors over what you have now, make sure to have parallel workloads.”
And here we are now, in June 2020, with widely available distributed computing architectures (with multiple cores of different types) deployed both in huge high performance computing centers (like the TOP500 supercomputers) and in portable embedded systems. One could think computer scientists are already taking full advantage of those architectures… but that’s not really the case! A first layer has been working for some time through the computer’s Operating System, which is able to manage concurrent applications: we can listen to music through a webapp and, simultaneously, work on our favorite productivity tool. But the market seems to be asking for much more in terms of the runtime and performance of applications, with a few examples being:
Additionally, note how the Enterprise High Performance Computing market, analyzed by Tractica in May 2018, is expected to grow significantly at a CAGR of 29%, reaching a market value of more than $31 billion by 2025. This growth is likely to be fueled by AI-driven applications, a specific example being embedded real-time decision systems.
So, several data points suggest that computer scientists need to go a step further in taking advantage of the new distributed architectures to cope with ever-increasing market requirements. One of the most powerful tools at their disposal is code parallelization inside each application (not just concurrency between applications), so that an application’s instructions can be executed in parallel on the different processing units, truly exploiting the newly deployed architectures.
But this is, to a large extent, a very complex task. Existing approaches to code parallelization inside an application rely mostly on two classes of tools, both of them with several intrinsic limitations:
This is where a Spanish deep tech company, Appentra, comes into play. Appentra is a spin-off of the University of A Coruña, leveraging more than 10 years of R&D on parallel computing led by the company’s co-founder and CEO, Prof. Manuel Arenaz. The company has developed a unique technology, Appentra’s Parallelware, able to generate new versions of real-life applications using parallel code, enabling them to run faster and meet the required business goals when deployed on distributed hardware architectures.
In a nutshell, as illustrated in the chart below, Appentra works as a static code analysis tool that ingests either sequential or already parallelized code and:
One important structural element is that Appentra is an evolving technology: (i) it currently handles an initial set of potentially parallelizable code patterns, but the AI engine is already learning new mutations and additional code patterns, progressively enriching Appentra’s knowledge base; (ii) it currently supports an initial set of programming languages (C and C++), yet the underlying compiler infrastructure used by Appentra has the potential to handle many other input languages, given that it is based on an abstract code representation scheme.
Let’s see now how Appentra is able to address the key market challenges discussed above:
In summary and through a business lens, Appentra brings to the enterprise market and to the high performance computing industry:
Several years have been invested in: (i) developing Appentra’s underlying technology; (ii) converting that technology into two commercial products; (iii) acquiring initial market validation and traction… Yet Appentra’s journey is just about to begin, and we, at Armilar, are thrilled to have the opportunity to be part of it!
Article written by João Dias, Principal at Armilar Venture Partners
Start in 1998: more than 10 years of R&D.
Foundation of Appentra: initial VC investments, EU grants, targeting the HPC market, first trial version.
Reference sales in the HPC market (ORNL, KAUST, NERSC); technical team completed; investment of 1.8M.
Successful pilot projects with big corporations; converting pilots into recurring sales.
New round of investment for company growth.
to be continued ...