This repository was archived by the owner on Dec 17, 2025. It is now read-only.
I have developed a small cross-platform plugin that implements neural network technology. Because TorchSharp's storage requirements are too large, I created this repository; the goal is to provide a minimal neural network implementation.
I copied the source code from the Neural Networks repository and refactored all of it to improve performance and expand functionality.
Specifically, I replaced double with float, removed the WinForms reference, and ensured that the sample can be compiled and run on all platforms.
As Microsoft no longer supports Visual Studio 2019, I updated the project's TargetFramework to net5.0. You can clone the source code and modify it according to your needs.
All of the code is written in C#, and the project uses the MIT license.
Multi-thread computing is supported, but performance is lower than TorchSharp's.
If you need faster performance, it is recommended to use other popular deep learning frameworks such as TensorFlow, PyTorch, or Keras.
UnitTest.SerizlizeSampleNN must be compiled with the UNIT_TEST or DEBUG symbol defined.
NeuralNetwork.SaveTo and NeuralNetwork.LoadFrom are implemented by NeuralNetworkModel.
NeuralNetworkModel model = new NeuralNetworkModel();
model.CopyFrom(network);
using (MemoryStream memoryStream = new MemoryStream())
{
    // serialize
    model.Write(memoryStream);
    // deserialize
    memoryStream.Seek(0, SeekOrigin.Begin); // rewind the stream before reading it back
    model = new NeuralNetworkModel();
    model.Read(memoryStream);
}
If you forget to invoke NeuralNetwork.SetMultiThread(false), the inner thread will be returned to CalculateThread.IdleThread when the GC invokes NeuralNetwork's finalizer:
~NeuralNetwork()
{
    SetMultiThread(false);
}
Only one thread is cached at CalculateThread.IdleThread, waiting for NeuralNetwork.SetMultiThread(true) to resume it; superfluous threads are aborted.
If you train hundreds of NeuralNetwork instances in memory and want them all to use multiple threads (suppose each instance is a 300,300,300 NeuralNetwork taking 90 MB and one thread),
you had better set the thread instance manually.
An instance of CalculateThread cannot be referenced by different instances of NeuralNetwork at the same time.
CalculateThread calculateThread = CalculateThread.IdleThread; // get a recycled thread, or a new one
for (int i = 0; i < 100; i++)
{
    neuralNetworkList[i].CalculateThread = calculateThread; // replaces SetMultiThread(true)
    neuralNetworkList[i].Train(xxx, xxx);
    neuralNetworkList[i].CalculateThread = null; // replaces SetMultiThread(false)
}
calculateThread.Abort(); // marks the thread for abort; it will run to exit
Compare with TorchSharp
Functions
LittleNN supports Activation.Sigmoid only; there are far fewer functions to choose from than in TorchSharp.
If you want a powerful library for learning about neural networks, I recommend TorchSharp.
Calculation duration, 200 calls to Network.Forward() in total
macOS, 2.6 GHz 6-Core Intel Core i7
LittleNN's NeuralNetworkModel.QuickForward replaces the Neural and Synapse instances with float[]. Because the elements of a float[] are contiguous in memory, NeuralNetworkModel.QuickForward is faster than NeuralNetwork.Forward.
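The flat-array idea can be sketched as follows. This is a minimal illustration under assumptions, not LittleNN's actual QuickForward code: the name LayerForward, its parameters, and the row-major weight layout are all hypothetical.

```csharp
using System;

static class FlatForwardSketch
{
    // Hypothetical sketch of one dense layer stored as flat float[] arrays.
    // Weights are laid out row-major: weights[j * inputCount + i] connects
    // input i to output j, so the inner loop reads memory sequentially.
    public static float[] LayerForward(float[] input, float[] weights, float[] bias)
    {
        int inputCount = input.Length;
        int outputCount = bias.Length;
        float[] output = new float[outputCount];
        for (int j = 0; j < outputCount; j++)
        {
            float sum = bias[j];
            int row = j * inputCount; // start of a contiguous weight row
            for (int i = 0; i < inputCount; i++)
                sum += weights[row + i] * input[i];
            // LittleNN supports Activation.Sigmoid only
            output[j] = 1f / (1f + MathF.Exp(-sum));
        }
        return output;
    }
}
```

Because the inner loop walks a contiguous weight row, cache lines and the hardware prefetcher are used well, which is the likely reason a flat-array forward pass beats an object-per-neuron design.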
If the estimated operation volume is less than CalculateThread.AmountOfComputation, NeuralNetwork calculates in a single thread.
Estimated operation volume = neuron count of layer A * neuron count of layer B * 10
CalculateThread.AmountOfComputation = 6400 * 10
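The threshold check described above can be sketched like this; ThreadChoiceSketch and UseMultiThread are hypothetical names mirroring the description, not the library's API:

```csharp
using System;

static class ThreadChoiceSketch
{
    // CalculateThread.AmountOfComputation, per the text above: 6400 * 10
    public const long AmountOfComputation = 6400L * 10;

    // Single-thread vs multi-thread decision for one pair of adjacent layers
    public static bool UseMultiThread(int layerANeurons, int layerBNeurons)
    {
        // Estimated operation volume = neuron count of layer A * neuron count of layer B * 10
        long estimatedVolume = (long)layerANeurons * layerBNeurons * 10;
        return estimatedVolume >= AmountOfComputation;
    }
}
```

For example, a 192→20 layer pair gives 192 * 20 * 10 = 38,400 < 64,000, so it stays single-threaded, while a 192→100 pair gives 192,000 and qualifies for multithreading.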
In unit tests, NeuralNetworkModel.QuickForward with multithreading showed a performance improvement of less than 5%, so I dropped multithreading support for NeuralNetworkModel.
In the (192,100,100,11) test, thread synchronization wastes more time than the computing boost from multithreading.
| InputSize,HideSize,OutputSize | TorchSharp-cpu | NeuralNetwork.SingleThread | NeuralNetwork.MultiThread | NeuralNetworkModel.QuickForward |
| --- | --- | --- | --- | --- |
| 192,20,20,11 | ≈13ms | ≈2ms | \ | ≈4ms |
| 192,30,30,11 | ≈14ms | ≈3ms | \ | ≈4ms |
| 192,100,100,11 | ≈16ms | ≈12ms | ≈19ms | ≈13ms |
| 192,300,300,300,11 | ≈21ms | ≈138ms | ≈92ms | ≈73ms |
Contributions
LittleNN is enough for a beginner.
Contributions and feedback are welcome. If you have any suggestions or bugs you want to report, please open an issue or pull request.
About
The goal is to provide a minimal neural network implementation. All of the code is written in C#, and the project is licensed under the MIT license.