-
Great work!
Any chance you could add support for 3-bit? I know the bitpacking is a bit tricky with 3-bit, but it would be great to have a 3-bit kernel for linear quantization, since the only one availa…
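Why 3-bit packing is tricky: 3 does not divide 8, so values straddle byte boundaries. One workable convention (a hedged sketch, not the requested kernel — function names and the group-of-8 layout are illustrative) is to note that 8 three-bit values occupy exactly 24 bits, i.e. 3 bytes:

```python
import numpy as np

# Sketch of 3-bit packing: 8 values x 3 bits = 24 bits = 3 bytes, so we
# pack in groups of 8 using a little-endian bit order.
def pack_3bit(values: np.ndarray) -> np.ndarray:
    """Pack ints in [0, 7] into a dense 3-bit stream."""
    assert values.size % 8 == 0, "pad to a multiple of 8 values (24 bits = 3 bytes)"
    # Expand each value into its 3 bits, low bit first, then pack 8 bits/byte.
    bits = ((values[:, None] >> np.arange(3)) & 1).astype(np.uint8)
    return np.packbits(bits.reshape(-1), bitorder="little")

def unpack_3bit(packed: np.ndarray, n: int) -> np.ndarray:
    """Inverse of pack_3bit: recover n 3-bit values from the packed bytes."""
    bits = np.unpackbits(packed, bitorder="little")[: n * 3].reshape(n, 3)
    return (bits << np.arange(3)).sum(axis=1)
```

With this layout, 8 values compress into 3 bytes instead of 8, and the round trip is exact for values in [0, 7].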
-
# Bug Report
**Orion version:** 0.2.5
**Current behavior:** does not compile
**Expected behavior:** compiles
**Steps to reproduce:**
```bash
git clone git@github.com:gizatechxyz/…
-
Re: compatibility with the units library (and indeed other libraries)
https://github.com/mpusz/units
Both the units library and the la library should constrain operators that multiply by scalars to avoid ambi…
-
# Check List
- [x] JavaScript: Array and object destructuring (in the console)
- [x] React: Hooks - useState()
- [x] Time to Code: useState() - component: StateExample
- [ ] Project: Traffic…
-
Currently some solvers (e.g. `pdhg`) only support a single operator, while some (e.g. `douglas_rachford_pd`) only support list of operators.
Converting between these is trivial but takes a few line…
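The normalization can be sketched with a tiny helper (the name `as_operator_list` and its placement are hypothetical, not part of the library):

```python
# Hypothetical helper: accept either a single operator or a sequence of
# operators and always return a list, so solvers like `pdhg` and
# `douglas_rachford_pd` can share one input convention.
def as_operator_list(ops):
    if isinstance(ops, (list, tuple)):
        return list(ops)
    return [ops]
```

A solver could then call this once at the top of its signature handling instead of each caller repeating the wrapping.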
-
Given operators `opA`, `opM` and `opN`, creates a saddle-point linear operator.
```julia
SaddleOperator(opA, opM, opN; S = promote_type(storage_type.(opA, opM, opN)))
Creates saddle point l…
-
# Failing Tests
> Please see the failing tests divided into sections below. Click on each section to expand. Feel free to get assigned to an issue by following the instructions [here](https://unify.ai…
-
Following the discussion in #80 I thought it would be good to lay out some ideas for a general block matrix/operator abstraction.
Some useful types of block operators:
- Block tridiagonal where ea…
-
1. Minor update for SFCSHP prepbufr converter - edited/added some height variables.
- Edited/added height variables
- Added ObsErrors for the simulated variables except stationElevation (…
-
```python
import torch
from transformer_engine.pytorch import Linear as TELinear, fp8_autocast
# m = torch.nn.Linear(16, 16).to("cuda") # This works
m = TELinear(16, 16)
x = torch.randn(16, 16…