Bender allows you to run trained models. You can use TensorFlow, Keras*, or Caffe*; the choice is yours. Either freeze the graph or export the weights to files.
* Support coming soon.
You can import a frozen graph directly from supported platforms or re-define the network structure and load the weights. Either way it just takes a few minutes.
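For TensorFlow 1.x, freezing a graph means folding the trained variables into constants and writing out a single .pb file. Here is a minimal sketch, assuming a trained session sess and an output node named "output" (both placeholders for your own model); the benderthon tool mentioned further down automates this step.

import tensorflow as tf

def freeze(sess, output_node_name="output", path="myModel.pb"):
    # Fold the trained variables into constants so the graph definition and
    # its weights end up in a single file.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, [output_node_name])
    # Write the frozen graph as a binary protobuf, ready to be imported.
    tf.train.write_graph(frozen, ".", path, as_text=False)

The resulting myModel.pb is exactly what the import snippet further down loads.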
Bender supports the most common ML nodes and layers, but it is also extensible, so you can write your own custom functions.
20 layers deep ~ 20 fps
60 layers deep ~ 8 fps
With Core ML, you can integrate trained machine learning models into your app; at the moment it supports Caffe and Keras 1.2.2+. Apple released conversion tools to create Core ML models, which can then be run easily.
The main problem with Core ML is the limited number of supported layers. If your model includes a custom layer, or one that is not supported, there is currently no way to add it.
Additionally, Core ML throttles GPU performance as it sees fit (supposedly to preserve battery life). Bender gives you 100% control by running directly on the GPU using the Metal Performance Shaders (MPS) API. Just remember that with great power comes great responsibility.
Finally, there is no easy way to add pre- or post-processing layers that run on the GPU alongside the model, even though these are often needed to adapt the input and output.
Bender is up to 40% faster when running the same model
Limited supported layers
No extensibility for custom layers
Pre/Post-processing needs to run separately
Needs iOS 11 (Bender works on iOS 10+)
Making an appearance on HBO’s Silicon Valley, the show’s lead consultant actually built the NotHotdog app, which became quite a hit among fans.
“When I built NotHotdog there was not much available, but I would definitely use Bender now!”
Lead consultant at HBO – Silicon Valley
Bender includes a TensorFlow converter that lets you import a model directly. Other converters are coming soon, and you can also write your own. You can use benderthon to export your TF model.
import MetalBender
let url = Bundle.main.url(forResource: "myModel", withExtension: "pb")! // A TensorFlow model.
let network = Network.load(url: url, inputSize: LayerSize(h: 256, w: 256, f: 3))
network.run(input: /* ... */) { output in
    // ...
}
Bender also provides a modern API to create and run the layers of a model. You can define your network using the Bender API and then load the weights from binary files. This approach can be simpler if you are using custom layers and don’t want to fiddle around with parsers and converters.
import tensorflow as tf

# conv_layer, residual_block and conv_transpose_layer are helper functions
# defined elsewhere in the project.
def style_net(image):
    conv1 = conv_layer("conv1", image, 32, 9, 2)
    conv2 = conv_layer("conv2", conv1, 64, 3, 2)
    conv3 = conv_layer("conv3", conv2, 128, 3, 2)
    resid1 = residual_block("res_block1", conv3, 3)
    resid2 = residual_block("res_block2", resid1, 3)
    resid3 = residual_block("res_block3", resid2, 3)
    resid4 = residual_block("res_block4", resid3, 3)
    convt1 = conv_transpose_layer("convt1", resid4, 64, 3, 2)
    convt2 = conv_transpose_layer("convt2", convt1, 32, 3, 2)
    convf = conv_layer("convFinal", convt2, 3, 5, 1, relu=False)
    net = tf.nn.tanh(convf)
    return net
let styleNet = Network(device: device, inputSize: inputSize, parameterLoader: loader)

styleNet.start
    ->> Convolution(size: ConvSize(outputChannels: 32, kernelSize: 9, stride: 1), id: "conv1")
    ->> Convolution(size: ConvSize(outputChannels: 64, kernelSize: 3, stride: 2), id: "conv2")
    ->> Convolution(size: ConvSize(outputChannels: 128, kernelSize: 3, stride: 2), id: "conv3")
    ->> Residual(size: ConvSize(outputChannels: 128, kernelSize: 3, stride: 1), id: "res_block1")
    ->> Residual(size: ConvSize(outputChannels: 128, kernelSize: 3, stride: 1), id: "res_block2")
    ->> Residual(size: ConvSize(outputChannels: 128, kernelSize: 3, stride: 1), id: "res_block3")
    ->> Residual(size: ConvSize(outputChannels: 128, kernelSize: 3, stride: 1), id: "res_block4")
    ->> ConvTranspose(size: ConvSize(outputChannels: 64, kernelSize: 3, stride: 2), id: "convt1")
    ->> ConvTranspose(size: ConvSize(outputChannels: 32, kernelSize: 3, stride: 2), id: "convt2")
    ->> Convolution(size: ConvSize(outputChannels: 3, kernelSize: 9, stride: 1), neuron: .tanh, id: "convFinal")
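If you take this second route, the binary weight files have to come from somewhere. Below is a minimal sketch, assuming TensorFlow 1.x, that dumps every trained variable to a raw float32 file; the file-naming scheme is only an illustration and must match whatever your parameter loader expects.

import os
import tensorflow as tf

def export_weights(sess, out_dir="weights"):
    os.makedirs(out_dir, exist_ok=True)
    for var in tf.trainable_variables():
        values = sess.run(var)  # trained values as a NumPy array
        # Hypothetical naming scheme: "conv1/weights:0" becomes "conv1_weights_0.bin".
        file_name = var.name.replace("/", "_").replace(":", "_") + ".bin"
        values.astype("float32").tofile(os.path.join(out_dir, file_name))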