SciSharp / NumSharp

High Performance Computation for N-D Tensors in .NET, similar API to NumPy.
https://github.com/SciSharp
Apache License 2.0
1.34k stars · 188 forks

Add missing casting operators and rewrite bitwise AND for NDArray #417

Closed daerhiel closed 3 years ago

daerhiel commented 4 years ago
  1. TensorFlow.NET's constant_op method relies on the byte type conversion in NumSharp.Lite, which will break when migrating to NumSharp.Core, so I decided to add the missing type conversions there.

  2. I've found broken tests in NDArray.AND and decided to rewrite and extend the operator set so that it covers all integral scalar types. I've made two private generic utilities that run a projector delegate against array elements, giving you uniform, maintainable code across all operations. You can use them pretty much everywhere an element-wise operator needs to be applied to arrays. The code is documented, validated, and unit tested. I'd also add Shape validation, but I haven't figured out the best strategy for that yet.
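
The "projector delegate" idea described above can be sketched roughly as follows. This is a hypothetical Python re-creation of the pattern, not the actual C# utilities from the PR; the name `project` and its signature are invented for illustration:

```python
import operator

def project(lhs, rhs, op):
    """Apply a binary projector `op` element-wise to two sequences.

    One generic routine like this lets AND, OR, XOR, etc. all share
    the same loop instead of each operator duplicating it.
    """
    if len(lhs) != len(rhs):
        # the Shape validation left open in the PR would go here
        raise ValueError("shape mismatch")
    return [op(a, b) for a, b in zip(lhs, rhs)]

print(project([0b1100, 7], [0b1010, 5], operator.and_))  # [8, 5]
```

The design win is that adding a new integral operator only means passing a different delegate, not copy-pasting another loop.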

Off-topic: I've found a lot of copy-pasted code that could largely be replaced with generics. That would be more compact and reusable, and it would reduce the maintenance burden for you. So it would be a good idea to put this refactoring on the roadmap; it would make working on the project a lot easier.

daerhiel commented 4 years ago

PS: I decided not to create a separate branch, since the change doesn't significantly affect the existing code base in any way; it only adds new features and fixes missing ones.

Oceania2018 commented 4 years ago

@daerhiel Can you also make sure NumSharp is working for SharpCV ?

daerhiel commented 4 years ago

> @daerhiel Can you also make sure NumSharp is working for SharpCV?

Haiping, SharpCV points to NumSharp.Lite, and I have updated NumSharp.Core. But if you're curious, SharpCV doesn't compile against .Core in one of the test methods, since the Core version doesn't have the required deconstructors:

But if you comment it out, the rest of the tests pass, except VideoCaptureFromFile, which fails on both Lite and Core.

daerhiel commented 3 years ago

@Nucs, I see that _FindCommonType infers the resulting type to increase precision, which enforces an operator like this:

```csharp
public static double BitwiseAnd(ulong lhs, long rhs) => (long)lhs & rhs;
```

This is not very suitable for bitwise binary operations. What's your suggestion here?
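
The precision concern can be seen concretely with NumPy's own scalar types (this is an illustration, not code from the PR): routing a `ulong`/`long` pair through a cast to `long` and a `double` result loses information at both steps.

```python
import numpy as np

u = np.uint64(2**64 - 1)          # all 64 bits set
print(u.astype(np.int64))         # -1: the bits are reinterpreted, sign flips
print(np.float64(u) == 2.0**64)   # True: a double has only 53 mantissa bits,
                                  # so the low bits of the ulong are rounded away
```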

Nucs commented 3 years ago

> @Nucs, I see that _FindCommonType infers the resulting type to increase precision, which enforces an operator like this:
>
> `public static double BitwiseAnd(ulong lhs, long rhs) => (long)lhs & rhs;`
>
> This is not very suitable for bitwise binary operations. What's your suggestion here?

_FindCommonType resolves the output type you would get from whatever types you pass as parameters. The method is written to match Python NumPy's return-type inference.

You need to open a Python console and check what type is returned when doing binary ops. If it's different, then we probably need a separate inference for all the other binary operations as well. Make sure to actually test against NumPy first.
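
Doing exactly that check from a Python console shows why the `uint64`/`int64` case is special: no integer type can hold both ranges, so NumPy's promotion falls back to `float64`, while other mixed pairs stay integral and bitwise ops work on them:

```python
import numpy as np

# What NumPy itself infers for mixed 64-bit operands:
print(np.result_type(np.uint64, np.int64))  # float64: no integer type holds both ranges
print(np.result_type(np.uint32, np.int64))  # int64: int64 can hold any uint32

# Bitwise ops work whenever the common type is integral:
a = np.array([0b1100], dtype=np.uint32)
b = np.array([0b1010], dtype=np.int64)
print((a & b).dtype)                        # int64
```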

Nucs commented 3 years ago

A couple of hacks for you to know about:

- InfoOf gives you statically cached information such as the NPTypeCode.
- Regen, which we use throughout the project to generate repetitive code, is a compiler I've written just for this project. Search the term "_REGEN" inside the solution and you'll see the amount of code it generates.
- NDIterator might come in handy when you get to more algorithmic work.
- Backends/Default/Default.Broadcasting.cs and np.broadcast.Tests.cs are a NumPy-identical rewrite of its broadcasting mechanisms, especially Broadcast(Shape leftShape, Shape rightShape).
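
For reference, the NumPy behavior that a NumPy-identical Broadcast(Shape leftShape, Shape rightShape) has to reproduce can be checked directly in Python (shapes here are arbitrary examples): dimensions are aligned from the right, and each pair must be equal or contain a 1.

```python
import numpy as np

# (8, 1, 3) vs (4, 1): aligned right-to-left, 1s stretch to match
print(np.broadcast_shapes((8, 1, 3), (4, 1)))  # (8, 4, 3)
print(np.broadcast_shapes((5, 4), (1,)))       # (5, 4)
```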

Nucs commented 3 years ago

@daerhiel Any reason you closed it? Should I review it?

daerhiel commented 3 years ago

@Nucs nope. I'm sorry, I see no reason to continue.