Recent improvements in ML capabilities make it possible to address a wider range of semiconductor test problems with higher confidence, moving ML models into the toolbox of Test and Product Engineers. Different ML models, of course, perform different functions. Some examples include:

* Regression models for curve fitting, used to reduce the number of measurements required to optimize register settings for minimum operating voltage (Vmin) test, and to adjust search ranges and step sizes for faster calibration of various analog functions. ML models can perform multi-dimensional regression on many parameters at once.
* Outlier (or "novelty") detection models, which can be used for Part Average Testing (PAT) and Dynamic PAT: using the statistics of each lot or wafer to tighten the test limits for high-reliability applications such as automotive, medical, or military devices. They can also detect a process shift across wafers or lots, and even flag DIB, probe card, or socket issues during test.
* Classification models, which can be used for tasks like speed-binning digital devices, automatically classifying wafer-test defect maps and shmoo plots, and enabling or disabling tests based on results from earlier in the test flow (i.e., Adaptive Test).

Customers want to implement ML in test to reduce cost of test, cutting test time through adaptive test and predictive calibration. But because ML calculations are often very compute-intensive, running these models on the tester host computer would be counter-productive. Customers need a well-integrated parallel processor with software support in the host programming language (IG-XL today, MST and others to come). They also need strong IP security for both their ML models and their data. Built-in multi-site ML support, including python, scikit-learn, and Docker containers, makes these capabilities easy to use. Containers are lightweight alternatives to Virtual Machines (VMs).
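As an illustration of the Dynamic PAT idea above, a minimal sketch (standard-library Python only, not tied to any tester API; the function names and the k=6 guard band are illustrative, not a production recipe) can compute robust per-lot limits from the lot's own statistics and flag maverick die:

```python
from statistics import median

def dynamic_pat_limits(measurements, k=6.0):
    """Robust per-lot test limits (Dynamic PAT sketch).

    Median and MAD-based sigma keep outliers in the lot from
    inflating the limits themselves; k=6 is an illustrative default.
    """
    med = median(measurements)
    mad = median(abs(x - med) for x in measurements)
    robust_sigma = 1.4826 * mad  # MAD-to-sigma factor for normal data
    return med - k * robust_sigma, med + k * robust_sigma

def flag_outliers(measurements, k=6.0):
    """Return the readings that fall outside the tightened limits."""
    lo, hi = dynamic_pat_limits(measurements, k)
    return [x for x in measurements if x < lo or x > hi]

# A lot of parametric readings with one maverick die
readings = [9.8, 10.0, 10.1, 9.9, 10.2, 10.0, 50.0]
print(flag_outliers(readings))  # the 50.0 reading is flagged
```

The 50.0 die would pass a fixed spec limit of, say, 0 to 100, which is exactly why PAT tightens limits per lot: a reading far from its own population is a reliability risk even when it is nominally in spec.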
While a VM emulates an entire operating system (OS), a container shares the host machine's kernel, resulting in smaller file sizes and faster startup and operation, while still offering process-level isolation and portability. On the UltraEdge, containers let users run any desired programming language (or any specific version of one) that runs on a Linux core, bundled with their program and all of its dependencies, eliminating version conflicts when the application is deployed to production sites around the world. Come learn what Docker containers and images are, how to develop and build them for your production machine learning application, and how to download and use them on the UltraEdge for a seamless production deployment.
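To make the "program plus all of its dependencies" bundling concrete, a Dockerfile for a Python ML application might look like the sketch below. The file names (`requirements.txt`, `app/`, `serve_model.py`) are hypothetical placeholders, and the UltraEdge-side integration details are not shown here; the point is that the base image, pinned dependencies, and application code all travel together in one image.

```dockerfile
# Pin the language runtime so every production site runs the same version
FROM python:3.11-slim

WORKDIR /app

# Install pinned dependencies first so this layer caches between builds
# (requirements.txt is a hypothetical name for your pinned dependency list)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the ML application itself (app/ and serve_model.py are illustrative)
COPY app/ .

# Entry point the tester-side integration would invoke
CMD ["python", "serve_model.py"]
```

Built once with `docker build -t my-ml-app .`, the resulting image can be pushed to a registry or exported with `docker save`, then loaded and run identically at any production site, which is what eliminates the version conflicts described above.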