In the previous post of our Serverless Machine Learning mini-series, we discussed how to develop and package a predictive model using scikit-learn. The goal was to prepare a model that could later be deployed to AWS Lambda, allowing a developer to call your serverless machine learning model like an API endpoint without having to design and implement an entire API. This can save your team not only hours of development time but also cloud costs – a benefit that’s one of the major drivers behind AWS Lambda.

In order to deploy this model, you’ll need to do several things to get started. Since you’ll be deploying without a server, your function and its dependencies need to be packaged for Lambda, which means setting up Docker and a configuration file. In addition, you’ll have to test out the function you developed in the previous post.

If you recall, you developed a function that was set up locally but hadn’t yet been deployed to AWS. But that’s not all that’s required to deploy your machine learning model. This post discusses the final steps required to do so.

Testing Locally

Before you deploy your model to AWS, it’s best to test it locally. 

This is good unit-testing practice because it’s easier to spot issues in your code and fix them before deployment than to untangle them later, when they could just as easily stem from how you set up your Lambda function in the first place.

This way, if you deploy your code and still see errors, you can be confident they stem from your configuration or some other dependency, not your code. When developing anything, even non-serverless functions, this step-by-step testing process lets you quickly figure out where the bugs are. So make sure you don’t skip it!

You can use this quick Python script to locally test the function:

import json

# Assumes the predict handler from the previous post lives in handler.py
from handler import predict

def main():
    # Mimic the event payload API Gateway would pass to the Lambda function
    event = {
        'queryStringParameters': {
            'Kms_Driven': .9,
            'Fuel_Type': .5,
            'Year': .8,
            'Seller_Type': .5,
            'Transmission': .3,
            'Owner': 0
        }
    }
    # Call the handler directly, just as Lambda would (no context object needed here)
    response = predict(event, None)
    body = json.loads(response['body'])
    print('output', body['predictPrice'])
    # Save the test event so it can be reused later with `sls invoke`
    with open('event.json', 'w') as event_file:
        event_file.write(json.dumps(event))

if __name__ == '__main__':
    main()
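Save this script as, say, test_local.py (the file name is just a suggestion) in the same folder as the handler from the previous post, then run it with python test_local.py. Besides printing the prediction, it writes the test event to event.json, which you’ll reuse when invoking the deployed function later.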

Running the script locally, you can simply pass in the values you want to test and see what response you get.


Figure 1: Output from your serverless model test (Source: Jupyter Notebook output)

As you can see, the response comes back with a status code of 200, which means the request was handled successfully.
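For reference, the dictionary returned by predict(event, None) has roughly the shape sketched below; this is inferred from the fields the test script reads, and the price value is purely illustrative:

# Rough shape of the handler's return value (inferred from what the test script unpacks)
{
    'statusCode': 200,                   # indicates the request was handled successfully
    'body': '{"predictPrice": 5.2}'      # a JSON string, hence json.loads() in the test script
}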

Deploying Your Serverless Machine Learning Model

With your model running locally, you can now be confident that if there are any problems with your serverless model, they’re probably due to another component. 

Speaking of other components, you can now start working on the deployment pieces, including Docker. Begin with the .yml file, which configures the function and lets you deploy the model easily using the sls command. To do so, you first need to install the Serverless Framework tooling using the commands below:

npm install -g serverless
npm i -D serverless-dotenv-plugin
sls plugin install -n serverless-python-requirements

Once you have installed the serverless command, you can set up your .yml file.

In brief, you’ll need to provide context to the Lambda function in order for it to operate. For example, you will need to specify the name of your function under the functions section:

functions:
  function_name:
    events:
      - http:
          method: get

This is just part of the .yml file and simply sets the function name for your serverless machine learning model. You’ll also need to provide information like the service and provider you’ll be using and your AWS credentials (unless you supply them from somewhere else); you can even set memory size and timeout specifications.

The complete file will look like this:

service: lambda
provider:
  name: aws
  credentials:
    accessKeyId: YOUR KEY HERE
    secretAccessKey: YOUR SECRET HERE
  runtime: python3.6
  stage: dev
  region: us-east-2
functions:
  predict-price:
    handler: handler.predict
    memorySize: 512
    timeout: 30
    events:
      - http:
          path: get-price
          method: get
          request:
            parameters:
              querystrings:
                Kms_Driven: true
                Fuel_Type: true
                Year: true
                Seller_Type: true
                Transmission: true
                Owner: true
plugins:
  - serverless-python-requirements
custom:
  pythonRequirements:
    dockerizePip: true
    slim: true

This file will help set up your serverless machine learning model and has a section for the parameters you’ll be taking in as well as some other configurable values.

With this setup, the serverless-python-requirements plugin will also pick up your requirements.txt file.

In this example, you only need the scikit-learn library. So pin your scikit-learn version and save it as requirements.txt in the same folder.
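For example, a minimal requirements.txt could contain nothing more than a pinned scikit-learn version (the version below is purely illustrative; pin whichever version you trained and pickled your model with):

scikit-learn==0.24.2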

With everything configured, you can now start to deploy your function via the command:

sudo sls deploy

This will in turn deploy your function onto Lambda, and you can then start interacting with it. 

Now that you have deployed your model, you need a way to call it; one way to do this is with the command below:

sudo sls invoke --function predict-price --path event.json

Figure 2: Output from your serverless machine learning model

As you can see, you get your expected output.
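Because the serverless.yml file also wires the function to an HTTP GET endpoint at the get-price path, you can call the deployed model over HTTP as well. Below is a minimal sketch using Python’s requests library; the URL is a placeholder for the endpoint that sls deploy reports for your service:

import requests

# Placeholder: replace with the endpoint URL reported by `sls deploy`
url = 'https://YOUR-API-ID.execute-api.us-east-2.amazonaws.com/dev/get-price'

# The same query-string parameters the Lambda function expects
params = {
    'Kms_Driven': .9,
    'Fuel_Type': .5,
    'Year': .8,
    'Seller_Type': .5,
    'Transmission': .3,
    'Owner': 0,
}

response = requests.get(url, params=params)
print(response.status_code)              # 200 if the request succeeded
print(response.json()['predictPrice'])   # the predicted price returned by the model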

You have now deployed your first machine learning model to AWS Lambda and can find it right on your AWS management console. It should look something like this:

Figure 3: AWS Lambda GUI for your function after deployment (Source: AWS Management Console)

This is the AWS management console, which allows you to track logs and calls to your function so you can manage usage, another huge benefit of using a serverless setup. Instead of having to spend time developing all of this tracking and logging infrastructure yourself, it comes built into AWS.
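If you prefer the command line, the Serverless Framework can also pull a function’s recent logs for you; assuming the function name from the serverless.yml above, that looks like:

sls logs --function predict-price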

Conclusion

Deploying machine learning models using traditional methods requires a machine learning engineer as well as either a DevOps team or a software engineer to help develop the API service that runs the model. This could take weeks or months depending on the availability of your team’s engineers, slowing down your machine learning progress. And that estimate doesn’t even cover building the kind of tracking and logging that AWS provides out of the box.

However, with serverless computing, deploying your machine learning model is much simpler. With just a few commands, you can have your entire model running on AWS Lambda and ready for inputs. This way, you can focus more on developing your serverless machine learning model instead of wasting time learning how to even deploy it in the first place.

Read More:

Observability Takes Too Much Developer Time, So Automate It

Container Image Support in AWS Lambda

Monitoring and Tracing GraphQL AWS AppSync Applications

AWS Tools Series: Amazon Aurora Database