KotlinSyft

The official Syft worker for secure on-device machine learning
https://www.openmined.org

KotlinSyft makes it easy to train and run inference on PySyft models on Android devices. This allows you to utilize training data located directly on the device itself, bypassing the need to send a user's data to a central server. This is known as federated learning.

There are a variety of additional privacy-preserving protections that may be applied, including differential privacy, multi-party computation, and secure aggregation.

OpenMined set out to build the world's first open-source ecosystem for federated learning on web and mobile. KotlinSyft is a part of this ecosystem, responsible for bringing secure federated learning to Android devices. You may also train models on iOS devices using SwiftSyft or in web browsers using syft.js.

If you want to know how scalable federated systems are built, Towards Federated Learning at Scale is a fantastic introduction!

Installation

KotlinSyft is available on Maven and JCenter. To add the library as a dependency in your Android project, use one of the following methods:

  1. Maven snippet:
<dependency>
  <groupId>org.openmined.kotlinsyft</groupId>
  <artifactId>syft</artifactId>
  <version>0.1.3</version>
  <type>pom</type>
</dependency>
  2. Gradle dependency:
implementation 'org.openmined.kotlinsyft:syft:0.1.3'
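
If your project uses the Gradle Kotlin DSL, the same dependency can be declared in build.gradle.kts. A minimal sketch (the jcenter() repository entry is an assumption; use whichever repository your build already resolves dependencies from):

// build.gradle.kts (module level) -- sketch only
repositories {
    jcenter()
}

dependencies {
    implementation("org.openmined.kotlinsyft:syft:0.1.3")
}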

Quick Start

As a developer, there are a few steps to building your own secure federated learning system upon the OpenMined infrastructure:

  1. :robot: Generate your secure ML model using PySyft. By design, PySyft is built upon PyTorch and TensorFlow so you don't need to learn a new ML framework. You will also need to write a training plan (training code the worker runs) and an averaging plan (code that PyGrid runs to average the model diff).
  2. :earth_americas: Host your model and plans on PyGrid which will deal with all the federated learning components of your pipeline. You will need to set up a PyGrid server somewhere, please see their installation instructions on how to do this.
  3. :tada: Start training on the device!

:notebook: The entire workflow and process is described in greater detail in our project roadmap.

You can use KotlinSyft as a front-end or as a background service (a background-service sketch follows the example below). The following is a quick-start example:

    val userId = "my Id"

    // Optional: Make an http request to your server to get an authentication token
    val authToken = apiClient.requestToken("https://www.mywebsite.com/request-token/$userId")

    // The config defines all the adjustable properties of the syft worker
    // The URL entered here must not include a connection protocol such as https or wss; the worker selects the protocol on its own
    // `this` supplies the context. It can be an activity context, a service context, or an application context.
    val config = SyftConfiguration.builder(this, "www.mypygrid-url.com").build()

    // Initialize the Syft worker to handle all your jobs
    val syftWorker = Syft.getInstance(authToken, config)

    // Create a new Job
    val newJob = syftWorker.newJob("mnist", "1.0.0")

    // Define training procedure for the job
    val jobStatusSubscriber = object : JobStatusSubscriber() {
        override fun onReady(
            model: SyftModel,
            plans: ConcurrentHashMap<String, Plan>,
            clientConfig: ClientConfig
        ) {
            // This function is called when KotlinSyft has downloaded the plans and protocols from PyGrid
            // and you are ready to train the model on your data.
            // `model` holds the model weights provided by PyGrid.
            // `plans` is a map of plan IDs to their plans.
            // `clientConfig` holds hyperparameters such as batch size, learning rate, and number of steps.

            // Plans are accessible by the plan ID used while hosting them on PyGrid.
            // Eventually you will be able to use the plan name here.
            val plan = plans["plan name"]

            repeat(clientConfig.properties.maxUpdates) { step ->

                // Get the relevant hyperparameters from clientConfig.planArgs.
                // All planArgs are strings; it is up to the user to deserialize them into the correct type.
                val batchSize = (clientConfig.planArgs["batch_size"]
                                 ?: error("batch_size doesn't exist")).toInt()
                val batchIValue = IValue.from(
                    Tensor.fromBlob(longArrayOf(batchSize.toLong()), longArrayOf(1))
                )
                val lr = IValue.from(
                    Tensor.fromBlob(
                        floatArrayOf(
                            (clientConfig.planArgs["lr"] ?: error("lr doesn't exist")).toFloat()
                        ),
                        longArrayOf(1)
                    )
                )
                // Your custom implementation to read a data batch from your data source
                val batchData = dataRepository.loadDataBatch(batchSize)
                // Get the model weights; return if they have not been set yet
                val modelParams = model.getParamArray() ?: return
                val paramIValue = IValue.listFrom(*modelParams)
                // plan.execute runs a single gradient step and returns the output as a PyTorch IValue
                val output = plan?.execute(
                    batchData.first,
                    batchData.second,
                    batchIValue,
                    lr,
                    paramIValue
                )?.toTuple()
                // The output is a tuple containing the outputs defined by the PySyft plan along with all the model params
                output?.let { outputResult ->
                    val paramSize = model.modelState!!.syftTensors.size
                    // The model params are always appended at the end of the output tuple
                    val beginIndex = outputResult.size - paramSize
                    val updatedParams =
                            outputResult.slice(beginIndex until outputResult.size)
                    // Update your model. You can perform any arbitrary computation and checkpointing with these model weights
                    model.updateModel(updatedParams.map { it.toTensor() })
                    // Get the required loss, accuracy, etc. values just as you would in PyTorch Android
                    val accuracy = outputResult[0].toTensor().dataAsFloatArray.last()
                }
            }
            // Once training finishes, generate the model diff
            val diff = newJob.createDiff()
            // Report the diff to PyGrid and finish the cycle
            newJob.report(diff)
        }

        override fun onRejected() {
            // Implement this function to define what your worker should do when it is rejected from the cycle
        }

        override fun onError(throwable: Throwable) {
            // Implement this function to handle errors during job execution
        }
    }

    // Start your job
    newJob.start(jobStatusSubscriber)

    // Voila! You are done.
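
If you prefer to run training as a background service rather than from the foreground, the same calls can be wrapped in a WorkManager worker. The sketch below is only an outline under that assumption: it reuses Syft, SyftConfiguration, and JobStatusSubscriber exactly as in the example above (their import statements are omitted because package paths may differ between releases), and a production worker should stay alive until the job reports or fails, for example by using a CoroutineWorker or RxWorker.

    import android.content.Context
    import androidx.work.Worker
    import androidx.work.WorkerParameters

    // Sketch: a WorkManager worker that starts the same federated learning job in the background
    class FederatedTrainingWorker(
        context: Context,
        params: WorkerParameters
    ) : Worker(context, params) {

        override fun doWork(): Result {
            // The auth token is passed in through WorkManager's input data (an illustrative choice)
            val authToken = inputData.getString("auth_token") ?: return Result.failure()

            val config = SyftConfiguration.builder(applicationContext, "www.mypygrid-url.com").build()
            val syftWorker = Syft.getInstance(authToken, config)
            val newJob = syftWorker.newJob("mnist", "1.0.0")

            newJob.start(object : JobStatusSubscriber() {
                // Override onReady / onRejected / onError exactly as in the example above
            })

            // Returning success immediately is a simplification; a real worker should wait for
            // onReady/onError to complete before finishing (e.g. with a CoroutineWorker or RxWorker)
            return Result.success()
        }
    }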

Running the Demo App

The demo app fetches the plans, protocols, and model weights from a PyGrid server hosted locally. The plans are then deserialized and executed using libtorch.

Follow these steps to set up an environment for running the demo app:

git clone https://github.com/OpenMined/PyGrid
cd PyGrid
git checkout 0e93aa645a63a02f45ae72b4ff3106c6402dbadf
git clone https://github.com/OpenMined/PySyft
cd PySyft
git checkout 9d4f8e3ebecc4a00428607403832c5628753f1fc
virtualenv -p python3 venv
source venv/bin/activate
make venv
jupyter notebook
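
With PyGrid running locally, point the demo app at that local instance. A minimal sketch, assuming PyGrid listens on port 5000 and the app runs in the Android emulator (which reaches the host machine's localhost at 10.0.2.2):

    // Hypothetical local setup: the host and port are assumptions; adjust them to your PyGrid instance
    val config = SyftConfiguration.builder(this, "10.0.2.2:5000").build()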

Built on KotlinSyft

Wearable UI

Federated-Wearables is a demo app for cross-device federated learning on wearables. The smartwatch app offloads the collected data to the paired phone app via Bluetooth for federated training, and receives the updated model to run inference in real time.

Contributing

  1. Star, fork, and clone the repo
  2. Open Android Studio and import project
  3. Do your work.
  4. Push to your fork
  5. Submit a PR to OpenMined/KotlinSyft

Read the contribution guide as a good starting place. Additionally, we welcome you to join our Slack for queries related to the library and to contributing in general. The #lib_kotlin_syft channel is specific to KotlinSyft development, while #lib_syft_mobile is meant for both the Android and iOS teams. See you there!

Contributors

These people were an integral part of the effort to bring KotlinSyft to fruition and remain active in its development.


varun khare

💻 ⚠️ 📖 🎨 🚇

Jose A. Corbacho

💻 ⚠️ 🎨 📖 🚇

Ravikant Singh

💻 📖

Saksham Rastogi

📖

Patrick Cason

📖 💼

Mohammed Galalen

📖 ⚠️

Erik Ziegler

🐛

Pengyuan Zhou

🚇

License

Apache License 2.0