Closed jhunter533 closed 5 months ago
In the current 3.0.x release you would need to provide a pretrained model, and getting a keras model running in CrossSim can take some tinkering. We're trying to make a few improvements to this on the pytorch branch: https://github.com/sandialabs/cross-sim/tree/pytorch/simulator/algorithms/dnn/torch
With the from_torch() method, an existing torch model can be loaded, and all of the layers for which we've written CrossSim equivalents (Linear and Conv2d) will be converted to those equivalents, while everything else will run through torch. We don't have any nice scripts available to assist with this yet, but we are working on that and should have them out in the next few weeks. Additionally, a similar keras interface (to supplement, and eventually replace, the current keras model runner) is almost complete, and we expect to have a version ready for people to poke at within the next couple of weeks.

For CrossSim-in-the-loop training, the synchronize function must be called after each optimizer iteration. We don't have any tutorials or scripts to demonstrate this yet, but we are working on those. You should expect this to be 2-5x slower than a torch-native training loop, depending on your system. We therefore wouldn't advise training from scratch, but fine-tuning for analog nonidealities should be reasonable.

Just adding that William Chapman put together a great tutorial on the new pytorch interface: https://github.com/sandialabs/cross-sim/blob/pytorch/tutorial/NICE2024/tutorial_pt2.ipynb
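To make the conversion step concrete, here is a toy sketch of the layer-swap idea behind from_torch(): walk the model's layers, replace the supported types (Linear, Conv2d) with analog equivalents, and leave everything else to run through torch unchanged. All class names below are hypothetical stand-ins, not the real CrossSim or torch API.

```python
# Toy sketch of the from_torch() conversion idea. The classes here are
# placeholders (NOT the real CrossSim API): only layer types with a known
# analog equivalent get swapped; everything else passes through untouched.

class Linear:          # stand-in for torch.nn.Linear
    pass

class Conv2d:          # stand-in for torch.nn.Conv2d
    pass

class ReLU:            # a layer type with no analog equivalent
    pass

class AnalogLinear:    # hypothetical CrossSim-style equivalent
    def __init__(self, digital):
        self.digital = digital  # wrap the original digital layer

class AnalogConv2d:    # hypothetical CrossSim-style equivalent
    def __init__(self, digital):
        self.digital = digital

# Map of supported digital layer types to their analog equivalents.
CONVERTIBLE = {Linear: AnalogLinear, Conv2d: AnalogConv2d}

def from_torch_sketch(layers):
    """Return a new layer list with supported layers converted."""
    return [CONVERTIBLE[type(l)](l) if type(l) in CONVERTIBLE else l
            for l in layers]

model = [Conv2d(), ReLU(), Linear()]
converted = from_torch_sketch(model)
print([type(l).__name__ for l in converted])
# → ['AnalogConv2d', 'ReLU', 'AnalogLinear']
```

The real interface operates on a torch nn.Module rather than a flat list, but the selection logic (convert what has an equivalent, pass through the rest) is the point being illustrated.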
Section 2.2 shows the conversion process, and Section 2.3 uses CrossSim-in-the-loop training. Part 3 shows a more involved example; hopefully this will help you get started.
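As a rough illustration of where the synchronize call sits in a CrossSim-in-the-loop training loop, here is a minimal mock, assuming a layer whose optimizer-updated weight must be copied back into the simulated analog array after each step. This is not the real CrossSim or torch API; it only shows the call ordering.

```python
# Mock of CrossSim-in-the-loop training: synchronize() is called after each
# optimizer iteration to push updated digital weights into the (simulated)
# analog arrays. Class and method names are illustrative placeholders.

class MockAnalogLayer:
    def __init__(self):
        self.weight = 0.0          # "digital" weight the optimizer updates
        self.analog_weight = 0.0   # simulated analog copy

    def synchronize(self):
        # Copy the optimizer-updated weight into the analog array.
        self.analog_weight = self.weight

def train(layer, steps, lr=0.1):
    for _ in range(steps):
        grad = layer.weight - 1.0   # toy gradient pulling weight toward 1.0
        layer.weight -= lr * grad   # optimizer step
        layer.synchronize()         # must be called after each iteration
    return layer

layer = train(MockAnalogLayer(), steps=3)
assert layer.analog_weight == layer.weight  # analog copy stays up to date
```

In a real fine-tuning loop the synchronize call would follow optimizer.step() in the same position, which is the extra per-iteration cost behind the 2-5x slowdown mentioned above.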
This might be a misunderstanding on my part, but if I wanted to make my own model, where the goal is to eventually have the computations run on memristors, would I pretrain the model in tensorflow and import it the same way as the provided models, or would I build the model through cross-sim? edit: I should specify it's for neural network stuff.