How Does AI Affect Programming, Software, and Training Needs?

Artificial intelligence is becoming ubiquitous, and AI workloads are increasingly running on the world's fastest computers, changing high-performance computing (HPC) in the process. But how exactly does AI affect programming, hardware and software, and training needs?

The author believes that AI may be the biggest driver of change in HPC history. To explain why, he lists the top ten ways AI is having the greatest impact on HPC in 2019.

Thirty years of experience in the chip industry: why AI may be the biggest change in the history of high-performance computing.

10. Tensors: the lingua franca of AI computation

The use of vector algebra spawned computers designed for vector computing. Cray's early supercomputers were vector machines; they encouraged applications to be expressed as vector and matrix algebra problems, which in turn pushed computer designs to make vector calculations run even faster. This cycle defined HPC for many years.

Tensor algebra can be thought of as generalized matrix algebra, so supporting it is a natural evolution of supercomputer capability, not a revolution. Any machine that supports matrix operations can perform tensor operations. Today, CPUs support vector and tensor workloads in HPC through general-purpose compilers, accelerated Python distributions, and enhanced libraries and optimized frameworks.
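
To make this concrete, here is a minimal sketch (using NumPy, which the article does not name, so treat it as an illustrative assumption): a matrix product is just a rank-2 tensor contraction, and the same notation scales up to higher-rank tensors.

```python
import numpy as np

# A matrix product is one particular tensor contraction.
A = np.random.rand(4, 5)
B = np.random.rand(5, 3)
C_matrix = A @ B                          # classic matrix algebra
C_tensor = np.einsum('ij,jk->ik', A, B)   # the same thing written as a contraction
assert np.allclose(C_matrix, C_tensor)

# The contraction notation generalizes naturally to higher-rank tensors,
# e.g. a whole batch of matrix products in a single call.
T = np.random.rand(8, 4, 5)               # 8 matrices of shape (4, 5)
U = np.random.rand(8, 5, 3)
V = np.einsum('bij,bjk->bik', T, U)       # result shape (8, 4, 3)
print(V.shape)
```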

Just as vectors once shaped HPC hardware, software, and thinking, tensors are now profoundly reshaping them.

9. Languages: higher-level programming languages

Fortran and C/C++ dominate HPC programming, and accelerators are usually supported through C interface extensions. Attempts to break this landscape with new languages have failed, because the existing languages sit at the center of an ecosystem of HPC applications, users, and code.

AI brings new requirements that will broaden the set of languages associated with HPC. These won't change the work of most physicists who use Fortran, but data scientists using MATLAB and Python will want solutions tailored to their needs.

Python and other frameworks and languages seem to be becoming an increasingly important part of HPC. The programs they actually run will still be written in C/C++/Fortran underneath, but AI programmers will neither know nor care.
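
A small sketch of what that looks like in practice (NumPy is used here as a stand-in for the accelerated libraries the article alludes to; the specific library is my assumption, not the author's):

```python
import numpy as np

# The Python line below is a thin wrapper: the multiply itself executes in
# compiled BLAS code (C/Fortran/assembly) that the Python programmer never sees.
x = np.random.rand(2000, 2000)
y = np.random.rand(2000, 2000)
z = x @ y   # dispatched to an optimized, typically multithreaded BLAS routine
print(z.shape)

# Reports which compiled BLAS/LAPACK libraries are actually doing the work.
np.show_config()
```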

8. Thinking differently: rethinking legacy code rather than simply porting it

HPC is steeped in tradition, while AI is relatively new. As the two interact, they will force a fresh look at legacy code, some of which may end up reimplemented. The argument may be "let's add some AI to this code," but in practice such efforts can be a waste of time. Remember the "convert it to Java" projects in the early days of the Java boom?

As in the Java craze, eagerness to rewrite code in a new form will produce both successes and failures. Return on investment (ROI) will be key, but predictions about the outcome of innovation are often wrong.

7. Portability and security: virtualization and containers

The specific questions behind security and portability are "can it run safely on my machine?" and "can it run on my machine at all?" These are the problems virtualization and containers try to solve. Security ultimately comes from the right combination of hardware and software features, and for many people virtualization and containers appear to provide that combination.

Containers have attracted the attention of many developers because they are more flexible than virtual machines, easier to deploy and scale, better suited to the cloud, and avoid virtual-machine licensing costs.

Mention containers at any HPC or AI meeting and the room seems to fill up. Things are changing: Python and Julia, for example, scale better when carefully configured, and containers can help deliver such configurations.

Containers give users a comfortable environment, and 2019 will see more of them in HPC, in part because of the interest shown by AI users. This will undoubtedly challenge HPC, because containers need an ecosystem optimized for performance. A great deal of fine work is already being done in this area, and the HPC community will help make it happen and satisfy the appetite for containers.

6. The scale problem: big data

Where there is AI, there is big data. The focus of AI is extracting value from very large data sets through data models. Many HPC centers already have substantial infrastructure for handling big-data problems well.

Big data is a major requirement for new systems in all HPC centers, and AI workloads are the main driver of big data demand.

Because memory is expensive, the ratio of memory size to FLOP/s has been declining for many years, which is bad news for big data. New persistent-memory features bring some hope, supporting big-data models on large machines, including HPC systems; these memory technologies act as an extension of both main memory and local storage (SSDs).
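
One way to picture "memory extension" in practice is a memory-mapped dataset that lives on an SSD or persistent-memory filesystem and is paged in on demand. The sketch below uses NumPy's `memmap`; the file name and sizes are hypothetical.

```python
import numpy as np

path = "big_dataset.dat"            # hypothetical file on an SSD or persistent memory
shape = (100_000, 128)              # in practice, far larger than available DRAM

# Create (or open) the dataset without loading it all into main memory.
data = np.memmap(path, dtype=np.float32, mode="w+", shape=shape)

# Process it slice by slice; only the touched pages need to be resident.
chunk = 10_000
for start in range(0, shape[0], chunk):
    block = data[start:start + chunk]
    block *= 0.5                    # some in-place work on this slice
data.flush()                        # push the changes back to the storage device
```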

This article is about how AI affects HPC, but I should point out that HPC's love of visualization will also influence AI. Keeping data close to the processors best suited to visualize it is one of the important ways HPC will affect AI/ML. The use and understanding of big data and the visualization of data and analyses are, of course, intertwined.

5. Lots of computation: cloud computing

AI developers may have embraced cloud computing more readily than HPC developers. While HPC has already "appeared in the cloud," the high-performance computing needs of AI applications will accelerate "HPC in the cloud."

4. Hardware: interactivity, with performance delivered through libraries and frameworks

AI applications do not contain much low-level code of their own; a few library interfaces and frameworks dominate. This means that hardware pitched with "AI accelerator" as its selling point must deliver its performance through those libraries and frameworks.

Interactivity is a long-standing request that HPC systems have "shelved," and AI programmers are now putting it front and center. How quickly this will change HPC remains to be seen, but in 2019 innovation in this area will be noticeable, even if scattered and somewhat quiet. Interactivity can also be thought of as "personalization."

To support AI workloads, HPC hardware is gaining more versatility, better support for interactivity, and additional library/framework abstractions optimized for performance. The HPC community's focus on performance will help show that greater convergence of infrastructure benefits data-center deployments. No one wants to give up performance if they don't have to, and the HPC community's expertise will help bring AI/ML performance to the mainstream, driving more convergence of hardware technology between the two communities.

3. Converging people: more diverse users and growing interest in HPC

AI will attract many new talents from diverse backgrounds, and it will democratize HPC on an unprecedented scale. For the past few years, "HPC democratization" has described how HPC, once available only to large organizations, has reached small teams of engineers and scientists. Mathematics and physics problems may have driven early supercomputing, but more recently users have found HPC performance indispensable in areas such as medicine, weather forecasting, and risk management.

AI brings a broader user base than HPC has had, adding new applications to its democratization. With AI added to the list of workloads driving HPC, there are ever more reasons to pursue the world's highest-performance computing, and the coming together of HPC experts and AI experts is generating excitement we can all feel.

2. New investment: inference

Machine learning can usefully be thought of as having two phases: a "training" (learning) phase and an "inference" (doing) phase. It appears that we will need far more cycles for inference than for training, especially as machine learning becomes ubiquitously embedded in the solutions around us. Market analysts estimate that the market for inference hardware is five to ten times the size of the market for training hardware.
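
A minimal sketch of the two phases (a tiny linear model in NumPy, my own illustrative example rather than anything from the article): training loops over the data many times, while inference is a single cheap evaluation per query.

```python
import numpy as np

# --- Training ("learning") phase: many passes over the data to fit parameters ---
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=1000)

w = np.zeros(3)
learning_rate = 0.1
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)   # gradient of the mean squared error
    w -= learning_rate * grad

# --- Inference ("doing") phase: one cheap evaluation per new input ---
x_new = np.array([1.0, 2.0, -1.0])
print(x_new @ w)                        # a single dot product answers the query
```

The asymmetry, training occasionally but running inference constantly once a model is deployed, is what the five-to-ten-times market estimate above reflects.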

With such a large opportunity, it is no surprise that everyone wants a piece of the inference market. Inference already runs on FPGAs, GPUs, DSPs, and many custom ASICs; power consumption, latency, and overall cost are all selling points. High-performance, low-latency, easily reprogrammed FPGAs look like a reasonable challenger to the currently CPU-dominated inference market, but time will tell.

Whichever way the market goes, inference workloads will have a significant impact on all computing, including HPC.

1. Application fusion: rather than "rethink and replace," fusion offers the best of both worlds, expanding workload diversity and converging different workloads

Visionaries have already shown that there are many opportunities to combine HPC and AI. Inspiring work ranges from neural networks that stand in for Monte Carlo simulations with very good results at a fraction of the computational cost, to integrating AI into models that predict extreme weather such as hurricanes, or into weather-forecasting systems more broadly. Generative adversarial networks (GANs) are a class of machine learning systems that many hold in high regard, and GANs will certainly help bring HPC and AI/ML together.
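
The "surrogate" idea behind the Monte Carlo result can be sketched in a few lines. The example below is my own simplified illustration: it fits a cheap polynomial (standing in for the neural network the article mentions) to a handful of expensive simulation runs, then answers further queries from the fit instead of re-running the simulation.

```python
import numpy as np

def expensive_simulation(theta, n_samples=200_000):
    # Stand-in for a costly Monte Carlo run: estimate E[sin(theta * U)], U ~ Uniform(0, 1).
    u = np.random.default_rng(0).random(n_samples)
    return float(np.mean(np.sin(theta * u)))

# 1. Run the expensive simulation at a modest number of training points.
thetas = np.linspace(0.0, 3.0, 20)
targets = np.array([expensive_simulation(t) for t in thetas])

# 2. Fit a cheap surrogate model to those results.
surrogate = np.poly1d(np.polyfit(thetas, targets, deg=5))

# 3. Answer new queries from the surrogate at a tiny fraction of the cost.
print(surrogate(1.7), expensive_simulation(1.7))   # the two should agree closely
```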

Although few applications combine HPC and AI techniques today, early results make it easy to predict that this is the future of HPC applications, and that AI will bring the biggest changes to HPC.

Understanding these ten forces

In one sense, computing does not change: what matters is what the whole system does for its user. Requirements change, but a complete system still consists of hardware and software components working together. It is easy to be distracted by a single technology, hardware or software; the best systems apply the latest technologies judiciously, an approach I like to call "selective acceleration," emphasizing acceleration where it matters. When I use Python heavily, I like Python acceleration (a software technique that relies on the CPU). When I need low-latency inference, I like FPGA acceleration. When I need only a little acceleration, I may use none at all. This is the art of building a balanced system, and nothing on this top-ten list breaks the reality that a balanced, multi-purpose machine delivers the best overall results.
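
As a simple illustration of "selective acceleration" in the Python case (my own example, timing the same reduction two ways):

```python
import time
import numpy as np

values = np.random.rand(2_000_000)

# Unaccelerated: a pure-Python loop over the data.
t0 = time.perf_counter()
total = 0.0
for v in values:
    total += v * v
t1 = time.perf_counter()

# Accelerated: hand the same reduction to an optimized CPU library call.
t2 = time.perf_counter()
total_fast = float(np.dot(values, values))
t3 = time.perf_counter()

print(f"pure Python: {t1 - t0:.3f} s, accelerated: {t3 - t2:.5f} s")
assert np.isclose(total, total_fast, rtol=1e-6)
```

Whether that kind of speedup matters depends on where a program actually spends its time, which is exactly the "accelerate where it matters" point above.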

Conclusion: AI will use HPC, which will change HPC forever

Clearly AI will use HPC, and that will change HPC forever. In fact, AI may be the biggest driver of change in HPC history. HPC will continue to advance as technology develops, and its workloads will change as AI develops. I don't think debates over "convergence" versus "intersection" give enough credit to the fact that AI users will join the HPC community and leave their mark. They will also use non-HPC systems, just as other HPC users do.

There will be custom high-performance machines designed and built specifically for AI workloads, and AI workloads will also run on more general-purpose high-performance machines, using acceleration to balance performance and flexibility. In every case, AI will help define what supercomputing means in the future, and that will change HPC forever.
