TVM is a compiler for machine learning frameworks that can optimize and target kernels to several different backends. Relay is a high level intermediate representation for the TVM framework. The goal of Relay is to replace old computation graph based IRs with a more expressive IR. More information can be found in this paper.
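As a rough illustration (not taken from the paper), a Relay program in its human-readable text format might look like the following hypothetical fragment, which adds two tensors:

```
fn (%x: Tensor[(1, 4), float32], %y: Tensor[(1, 4), float32]) {
  add(%x, %y)
}
```

Variables, shapes, and operator names here are illustrative; the Relay documentation describes the full text format.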
The TVM Relay frontend lives in the `frontends/relay` directory of the Calyx repository and generates Calyx components from the Relay intermediate representation.
Clone the TVM repository and check out the `v0.10.dev0` tag:

```shell
git clone git@github.com:apache/incubator-tvm.git
cd incubator-tvm && git checkout v0.10.dev0
git submodule init && git submodule update
```
Set up the build (the default configuration is fine, since we don't need extra backends like LLVM or CUDA):

```shell
mkdir build && cd build
cp ../cmake/config.cmake .
cmake -G Ninja .. && ninja
```
Install the `tvm` Python package by building a wheel:

```shell
cd ../python && python3 setup.py bdist_wheel
pip3 install --user dist/tvm-*.whl
```
If you get an error mentioning `shutil`, try deleting the `python/` directory, restoring it, and rerunning the commands above:

```shell
cd .. && rm -rf python && git checkout -- python
```
If you are on macOS Big Sur and get an error similar to "(wheel).whl is not a supported wheel on this platform", try changing the `11_0` in the wheel's filename to `10_9`. See this GitHub issue for more information.
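The rename can be done by hand or scripted; the filename below is hypothetical, so substitute whatever `bdist_wheel` actually produced in `dist/`:

```python
# Hypothetical wheel filename; substitute the one actually in dist/.
wheel = "tvm-0.10.dev0-cp39-cp39-macosx_11_0_x86_64.whl"

# Re-tag the wheel with the 10_9 platform tag that pip accepts.
fixed = wheel.replace("macosx_11_0", "macosx_10_9")
print(fixed)  # tvm-0.10.dev0-cp39-cp39-macosx_10_9_x86_64.whl
```

Rename the file on disk accordingly before rerunning `pip3 install`.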
Install ANTLR v4.7.2 (required for the Relay text format parser):

```shell
pip3 install -Iv antlr4-python3-runtime==4.7.2
```

Install `pytest`:

```shell
pip3 install pytest
```
Install Dahlia, which is used when lowering Relay call nodes to Calyx.
Install the calyx-py library.
Try this to run a simple example:

```shell
cd calyx/frontends/relay
python3 example.py tensor_add
```
- `-h`: Show the help text, including the available examples.
- `-r`: Dump the Relay IR instead of the Calyx output.
A simple script is provided to run an Open Neural Network Exchange (ONNX) model. In addition to installing TVM Relay as described above, you'll need the following pip packages for ONNX simulation and image pre-processing:

```shell
pip3 install opencv-python Pillow mxnet onnx simplejson
```
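The exact pre-processing depends on the dataset (see the `-d` flag below); as a rough illustration, not the script's actual code, MNIST-style preprocessing typically scales a 28x28 grayscale image into [0, 1] and adds batch and channel dimensions:

```python
import numpy as np

# Stand-in for a decoded 28x28 grayscale image; in practice this would
# come from Pillow or OpenCV, installed above.
img = (np.arange(28 * 28, dtype=np.float32) % 256).reshape(28, 28)

# Scale pixels into [0, 1] and reshape to the (batch, channel, H, W)
# layout that image-classification models commonly expect.
batch = (img / 255.0)[np.newaxis, np.newaxis, :, :]
print(batch.shape)  # (1, 1, 28, 28)
```
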
For example, we can simulate the LeNet ONNX model found here using the following command:

```shell
python3 frontends/relay/onnx_to_calyx.py \
  -n "lenet" \
  -d "MNIST" \
  -i "/path/to/image.png" \
  -onnx "/path/to/model.onnx" \
  -o calyx
```
- `-n`: The name of the input net, used mostly for naming the output files.
- `-d`: The dataset against which the input will be classified, e.g. `MNIST`. This determines what preprocessing is applied to the image.
- `-i`: The file path to the input image you want classified.
- `-onnx`: The file path to the ONNX model.
- `-o`: The type of output. One of:
  - `tvm`: Execute the ONNX model using the TVM executor and print the final softmax values to the console. No postprocessing is conducted.
  - `relay`: Output a file with the corresponding Relay program.
  - `calyx`: Output a `.data` file and a Calyx program for simulation.
  - `all`: All of the above.
- `-s`: An optional boolean argument signifying `save_mem`, set to true by default. When true, the script produces a Calyx design that requires less internal memory than the design produced when the flag is false.
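The `.data` file produced for simulation is JSON mapping each external memory to its contents. The memory name and `format` fields below are a hypothetical sketch, so inspect the file your run actually produces before relying on this layout:

```python
import json

# Hypothetical .data payload: one entry per external memory, each with
# a flat list of values and a numeric-format descriptor.
example = """
{
  "mem_0": {
    "data": [0, 1, 2, 3],
    "format": {"numeric_type": "bitnum", "is_signed": false, "width": 32}
  }
}
"""

payload = json.loads(example)
print(sorted(payload))           # memory names in the file
print(payload["mem_0"]["data"])  # flat list of stored values
```
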