Handling M1 problems in Python local development using Docker

I’ve got a new laptop: a MacBook Pro with the M1 processor. The battery life is impressive and the performance is very good, but not everything is good news. The M1 processor has a different architecture: now we’re using arm64 instead of x86_64.

The problem comes when we need to compile. We need to take this into account. With Python I normally use pyenv to manage different Python versions on my laptop, and I create one virtualenv per project to isolate my environment. It worked like a charm, but now I’m facing problems due to the M1 architecture. For example, to install a specific Python version with pyenv I need to compile it. Also, when I install a pip package that provides a binary, the M1 version must be available.

This kind of problem is a bit frustrating. Apple provides us with Rosetta to run x86 binaries, but a simple pip install xxx turns into a nightmare. For me, that’s not acceptable. I want to deploy projects to production, not become an expert in low-level architectures. So, Docker is my friend.

My solution to avoid this kind of problem is Docker. Now I’m not using pyenv. If I need a Python interpreter, I build a Docker image. Instead of a virtualenv, I create a container.
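
In practice the idea looks like this (a quick sketch, assuming an image tagged demo built from the Dockerfile below; my_script.py is just a placeholder name):

# open a Python REPL inside the container (instead of "activating" a virtualenv)
docker run -it --rm -v "$PWD"/src:/src demo python

# run a script mounted from the host
docker run -it --rm -v "$PWD"/src:/src demo python my_script.py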

PyCharm also allows me to use a Docker-based interpreter without any problem.

This is my Python Dockerfile:

FROM python:3.9.6 AS base

ENV APP_HOME=/src
ENV APP_USER=appuser

RUN groupadd -r $APP_USER && \
    useradd -r -g $APP_USER -d $APP_HOME -s /sbin/nologin -c "Docker image user" $APP_USER

WORKDIR $APP_HOME

ENV TZ 'Europe/Madrid'
RUN echo $TZ > /etc/timezone && \
    apt-get update && apt-get install --no-install-recommends \
    -y tzdata && \
    rm /etc/localtime && \
    ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && \
    dpkg-reconfigure -f noninteractive tzdata && \
    apt-get clean

RUN pip install --upgrade pip

FROM base

ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

WORKDIR $APP_HOME

COPY requirements.txt .
RUN pip install -r requirements.txt

ADD src .

RUN chown -R $APP_USER:$APP_USER $APP_HOME

USER $APP_USER

I can build my image:

docker build -t demo .

Now I can add an interpreter in PyCharm using my demo:latest image.

If I need to add a pip dependency I cannot do it with a local pip install. I have two options: add the dependency to requirements.txt and rebuild the image, or run pip inside the Docker container (with docker run -it --rm …). To organize those scripts we can easily create a package.json file.

{
  "name": "flaskdemo",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "docker_local_build": "docker build -t $npm_package_name .",
    "freeze": "docker run -it --rm -v \"$PWD\"/src:/src $npm_package_name python -m pip freeze > requirements.txt",
    "local": "npm run docker_local_build && npm run freeze",
    "python": "docker run -it --rm -v $PWD/src:/src $npm_package_name:latest",
    "bash": "docker run -it --rm -v $PWD/src:/src $npm_package_name:latest bash"
  },
  "author": {
    "name": "Gonzalo Ayuso",
    "email": "gonzalo123@gmail.com",
    "url": "https://gonzalo123.com"
  },
  "license": "MIT"
}
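
With that in place, the day-to-day workflow is just a couple of npm scripts (the package name, flaskdemo, is used as the image tag):

# build the image and regenerate requirements.txt from the container
npm run local

# open a Python REPL (the image's default command) or a bash shell inside the container
npm run python
npm run bash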

Extra

There’s another problem with M1. Maybe you won’t need to face it, but if you build a Docker image on an M1 laptop and try to deploy the resulting container to a Linux server (not an arm64 server), it won’t work. To solve it, you need to build your containers for the correct architecture. Docker allows us to do that. For example:

docker buildx build --platform linux/amd64 -t gonzalo123/demo:production .
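
buildx can also build for both architectures at once and push the result to the registry in a single step (a sketch; it assumes you are logged in to the registry):

# create a builder that supports multi-platform builds (only needed once)
docker buildx create --use

# build for amd64 and arm64 and push both images to the registry
docker buildx build --platform linux/amd64,linux/arm64 -t gonzalo123/demo:production --push .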

https://github.com/gonzalo123/docker.py


Playing with lambda, serverless and Python

A couple of weeks ago I attended a serverless course. I’ve played with lambdas from time to time (basically when AWS forced me to use them) but without knowing exactly what I was doing. After this course I know how to work with the Serverless Framework and I understand the lambda world better. Today I want to hack a little bit and create a simple Python service to obtain random numbers. Let’s start.

We don’t need Flask to create lambdas, but as I’m very comfortable with it we’ll use it here.
Basically I follow the steps that I’ve read here.

from flask import Flask

app = Flask(__name__)


@app.route("/", methods=["GET"])
def hello():
    return "Hello from lambda"


if __name__ == '__main__':
    app.run()

And the serverless yaml to configure the service:

service: random

plugins:
  - serverless-wsgi
  - serverless-python-requirements
  - serverless-pseudo-parameters

custom:
  defaultRegion: eu-central-1
  defaultStage: dev
  wsgi:
    app: app.app
    packRequirements: false
  pythonRequirements:
    dockerizePip: non-linux

provider:
  name: aws
  runtime: python3.7
  region: ${opt:region, self:custom.defaultRegion}
  stage: ${opt:stage, self:custom.defaultStage}

functions:
  home:
    handler: wsgi_handler.handler
    events:
      - http: GET /

We’re going to use serverless plugins. We need to install them:

npx serverless plugin install -n serverless-wsgi
npx serverless plugin install -n serverless-python-requirements
npx serverless plugin install -n serverless-pseudo-parameters

And that’s all. Our “Hello world” lambda service with Python and Flask is up and running.
Now we’re going to create a “more complex” service: it returns a random number using the random.randint function.
randint requires two parameters: start and end. We’re going to pass the end parameter to our service. The start value will be parameterized. I’ll parameterize it only because I want to play with AWS’s Parameter Store (SSM). It’s just an excuse.
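
The service reads that start value from the Parameter Store, so the parameter has to exist before we deploy. Something like this does the trick (a sketch; I’m assuming the ssm_prefix in .env is /random and the stage is dev, which is what the IAM statements below expect):

aws ssm put-parameter \
    --name "/random/dev/from_int" \
    --value "1" \
    --type String \
    --region eu-central-1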

Let’s start with the service:

from random import randint
from flask import Flask, jsonify
import boto3
from ssm_parameter_store import SSMParameterStore

import os
from dotenv import load_dotenv

current_dir = os.path.dirname(os.path.abspath(__file__))
load_dotenv(dotenv_path="{}/.env".format(current_dir))

app = Flask(__name__)

app.config.update(
    STORE=SSMParameterStore(
        prefix="{}/{}".format(os.environ.get('ssm_prefix'), os.environ.get('stage')),
        ssm_client=boto3.client('ssm', region_name=os.environ.get('region')),
        ttl=int(os.environ.get('ssm_ttl'))
    )
)


@app.route("/", methods=["GET"])
def hello():
    return "Hello from lambda"


@app.route("/random/<int:to_int>", methods=["GET"])
def get_random_quote(to_int):
    from_int = app.config['STORE']['from_int']
    return jsonify(randint(from_int, to_int))


if __name__ == '__main__':
    app.run()

Now the serverless configuration. I could use just one function, handling all routes and letting Flask do the job.

functions:
  app:
    handler: wsgi_handler.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'

But in this example I want to create two different functions, just for fun (and to use different role statements and different logs in CloudWatch).

service: random

plugins:
  - serverless-wsgi
  - serverless-python-requirements
  - serverless-pseudo-parameters
  - serverless-iam-roles-per-function

custom:
  defaultRegion: eu-central-1
  defaultStage: dev
  wsgi:
    app: app.app
    packRequirements: false
  pythonRequirements:
    dockerizePip: non-linux

provider:
  name: aws
  runtime: python3.7
  region: ${opt:region, self:custom.defaultRegion}
  stage: ${opt:stage, self:custom.defaultStage}
  memorySize: 128
  environment:
    region: ${self:provider.region}
    stage: ${self:provider.stage}

functions:
  app:
    handler: wsgi_handler.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'
    iamRoleStatements:
      - Effect: Allow
        Action: ssm:DescribeParameters
        Resource: arn:aws:ssm:${self:provider.region}:#{AWS::AccountId}:*
      - Effect: Allow
        Action: ssm:GetParameter
        Resource: arn:aws:ssm:${self:provider.region}:#{AWS::AccountId}:parameter/random/*
  home:
    handler: wsgi_handler.handler
    events:
      - http: GET /

And that’s all. “npx serverless deploy” and my random generator is running.
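
We can test it with curl against the endpoint that the deploy command prints out (the URL below is just a placeholder):

curl https://xxxxxxxxxx.execute-api.eu-central-1.amazonaws.com/dev/
curl https://xxxxxxxxxx.execute-api.eu-central-1.amazonaws.com/dev/random/10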

Web console with node.js

Continuing with my node.js experiments, this time I want to create a web console. The idea is simple: I want to send a few commands to the server and display the output in the browser. I could do it entirely with PHP, but I want to send the output to the browser as soon as it appears, without waiting for the command to finish. OK, we can do that by flushing the output on the server, but that solution normally crashes if we keep the application open for a long time. WebSockets to the rescue, again. If we need a cross-browser implementation we need the socket.io library. Let’s start:

The node server is a simple WebSocket server. In this example we launch each command with the spawn function (require('child_process').spawn) and send the output over the WebSocket. Simple and pretty straightforward.

var sys   = require('sys'),
    http  = require('http'),
    url   = require('url'),
    spawn = require('child_process').spawn,
    ws    = require('./ws.js');

var availableComands = ['ls', 'ps', 'uptime', 'tail', 'cat'];
ws.createServer(function(websocket) {
    websocket.on('connect', function(resource) {
        var parsedUrl = url.parse(resource, true);
        var rawCmd = parsedUrl.query.cmd;
        var cmd = rawCmd.split(" ");
        if (cmd[0] == 'help') {
            websocket.write("Available comands: \n");
            for (i=0;i<availableComands.length;i++) {
                websocket.write(availableComands[i]);
                if (i< availableComands.length - 1) {
                    websocket.write(", ");
                }
            }
            websocket.write("\n");

            websocket.end();
        } else if (availableComands.indexOf(cmd[0]) >= 0) {
            if (cmd.length > 1) {
                options = cmd.slice(1);
            } else {
                options = [];
            }
            
            try {
                var process = spawn(cmd[0], options);
            } catch(err) {
                console.log(err);
                websocket.write("ERROR");
            }

            websocket.on('end', function() {
                process.kill();
            });

            process.stdout.on('data', function(data) {
                websocket.write(data);
            });

            process.stdout.on('end', function() {
                websocket.end();
            });
        } else {
             websocket.write("Comand not available. Type help for available comands\n");
             websocket.end();
        }
    });
  
}).listen(8880, '127.0.0.1');

The web client is similar to the one in my previous post:

var timeout = 5000;
var wsServer = '127.0.0.1:8880';

var ws;


function cleanString(string) {
    return string.replace(/&/g,"&amp;").replace(/</g,"&lt;").replace(/>/g,"&gt;");
}


function pad(n) {
    return ("0" + n).slice(-2);
}

var cmdHistory = [];
function send(msg) {
    if (msg == 'clear') {
        $('#log').html('');
        return;
    }
    try {
        ws = new WebSocket('ws://' + wsServer + '?cmd=' + msg);
        $('#toolbar').css('background', '#933');
        $('#socketStatus').html("working ... [<a href='#' onClick='quit()'>X</a>]");
        cmdHistory.push(msg);
        $('#log').append("<div class='cmd'>" + msg + "</div>");
        console.log("startWs:");
    } catch (err) {
        console.log(err);
        setTimeout(startWs, timeout);
    }

    ws.onmessage = function(event) {
        $('#log').append(util.toStaticHTML(event.data));
        window.scrollBy(0, 100000000000000000);
    };

    ws.onclose = function(){
        //console.log("ws.onclose");
        $('#toolbar').css('background', '#65A33F');
        $('#socketStatus').html('Type your comand:');
    }
}

function quit() {
    ws.close();
    window.scrollBy(0, 100000000000000000);
}
util = {
  urlRE: /https?:\/\/([-\w\.]+)+(:\d+)?(\/([^\s]*(\?\S+)?)?)?/g, 

  //  html sanitizer 
  toStaticHTML: function(inputHtml) {
    inputHtml = inputHtml.toString();
    return inputHtml.replace(/&/g, "&amp;")
                    .replace(/</g, "&lt;")
                    .replace("/n", "<br/>")
                    .replace(/>/g, "&gt;");
  }, 

  //pads n with zeros on the left,
  //digits is minimum length of output
  //zeroPad(3, 5); returns "005"
  //zeroPad(2, 500); returns "500"
  zeroPad: function (digits, n) {
    n = n.toString();
    while (n.length < digits) 
      n = '0' + n;
    return n;
  },

  //it is almost 8 o'clock PM here
  //timeString(new Date); returns "19:49"
  timeString: function (date) {
    var minutes = date.getMinutes().toString();
    var hours = date.getHours().toString();
    return this.zeroPad(2, hours) + ":" + this.zeroPad(2, minutes);
  },

  //does the argument only contain whitespace?
  isBlank: function(text) {
    var blank = /^\s*$/;
    return (text.match(blank) !== null);
  }
};
$(document).ready(function() {
  //submit new messages when the user hits enter if the message isnt blank
  $("#entry").keypress(function (e) {
    console.log(e);
    if (e.keyCode != 13 /* Return */) return;
    var msg = $("#entry").attr("value").replace("\n", "");
    if (!util.isBlank(msg)) send(msg);
    $("#entry").attr("value", ""); // clear the entry field.
  });
});

And that’s all. In fact we don’t need a single line of PHP to build this web console. Last year I tried to do something similar with PHP and it was a big mess. With node this kind of job is trivial. I don’t know if node.js is the future or just another hype, but it’s easy. And cool. Really cool.

You can see the full code at Github here. Anyway, you must be careful if you run this application on your host: you are letting users execute raw unix commands. A bit of a security layer would be necessary.