We are excited to announce new tools and improvements that enable customers to reduce the time to deploy machine learning (ML) models, including foundation models (FMs), on Amazon SageMaker for inference at scale from days to hours. These include a new Python SDK library that simplifies packaging and deploying an ML model on SageMaker from seven steps to one, with an option to run inference locally. In addition, Amazon SageMaker is offering new interactive UI experiences in Amazon SageMaker Studio that will help customers quickly deploy their trained ML models or FMs using performant and cost-optimized configurations in as few as three clicks.
With the new Amazon SageMaker Python SDK library, customers can take any framework model artifacts or public FMs and easily convert them into a deployable ML model with a single function call. In addition, customers can locally validate, optimize, and deploy ML models to Amazon SageMaker in a few minutes from their local IDEs or notebooks. The new interactive experiences in SageMaker Studio let customers easily create a deployable ML model by selecting a framework version of their choice and uploading pre-trained model artifacts. Further, customers can select one or more of their deployable ML models or FMs and deploy them with just a couple of clicks.
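As a rough sketch of the single-function-call flow described above, the SDK's ModelBuilder class can turn model artifacts into a deployable model object. The model name, sample input, and sample output below are hypothetical placeholders, and running this requires an AWS account with appropriate SageMaker permissions:

```python
# Sketch: build and deploy a model with the SageMaker Python SDK ModelBuilder.
# Sample input/output and the model reference are illustrative assumptions.
from sagemaker.serve.builder.model_builder import ModelBuilder
from sagemaker.serve.builder.schema_builder import SchemaBuilder

# SchemaBuilder infers serialization from example request/response payloads.
schema = SchemaBuilder(
    sample_input="What is machine learning?",
    sample_output="Machine learning is ...",
)

# A single call produces a deployable SageMaker model object.
model_builder = ModelBuilder(
    model="my-pretrained-model",  # hypothetical: framework model or FM reference
    schema_builder=schema,
)
model = model_builder.build()

# Deploy the model to a real-time SageMaker endpoint for inference.
predictor = model.deploy()
```

The SDK also supports a local mode for validating the model container on your own machine before deploying to a SageMaker endpoint, which matches the local-inference option mentioned above.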
For more information about the AWS Regions where Amazon SageMaker Inference is available, see the AWS Region table.