ObrienlabsDev / machine-learning
Machine Learning - AI - Tensorflow - Keras - NVidia - Google
MIT License · 0 stars · 0 forks
Issues (newest first)
#28 · Google gemma-2-9b cuBLAS error running on 48G A6000 Ampere under CUDA 12.4 and dual 24G 4090 under CUDA 12.5 · opened by obriensystems 3 days ago · 0 comments
#27 · Google Gemma 2 27B is out: set up inference, upgrade transformers, run on 48G A6000 Ada and 128G 14900K · opened by obriensystems 5 days ago · 11 comments
#26 · NVIDIA Certification · opened by obriensystems 1 week ago · 0 comments
#25 · Investigate the CUDA requirement on Linux, specifically for demand-paged virtual unified memory management · opened by obriensystems 1 month ago · 0 comments
#24 · ML weather forecasting - 500k radar images per site · opened by obriensystems 2 months ago · 0 comments
#23 · A6000 performance varies across top 790 motherboards by 25% · opened by obriensystems 3 months ago · 1 comment
#22 · RAG for Google Gemma 2B and 7B · opened by obriensystems 3 months ago · 0 comments
#21 · Using PEFT (parameter-efficient fine-tuning) and the larger Google Gemma 7B model to generate a training set for customizing the Gemma 2B model · opened by obriensystems 3 months ago · 0 comments
#20 · Google TPU Research Cloud allowlist provisioning · opened by obriensystems 3 months ago · 1 comment
#19 · Kubeflow and Vertex AI Pipelines for MLOps · opened by obriensystems 3 months ago · 0 comments
#18 · ML Kit for Android (TPU) and iOS · opened by obriensystems 3 months ago · 0 comments
#17 · TensorFlow on CPU · opened by obriensystems 3 months ago · 0 comments
#16 · Verify issue running tensorflow/tensorflow:latest-gpu on dual RTX-A4500 with NVLink but not on dual RTX-4090 PCIe x8: tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence · opened by obriensystems 3 months ago · 0 comments
#15 · GPU provisioning on Lambda Labs for H100 · opened by obriensystems 3 months ago · 0 comments
#14 · GPU provisioning on AWS - specifically 48G NVidia L40S, 80G H100 - and compare to 48G RTX-A6000 · opened by obriensystems 3 months ago · 1 comment
#13 · Google Gemma 7B 2B OSS models are available on Hugging Face as of 20240221 · opened by obriensystems 4 months ago · 22 comments
#12 · Learned context in LLM via user authentication, model saving · opened by obriensystems 4 months ago · 0 comments
#11 · Immersive Stream using GCP L4 for XR · opened by obriensystems 4 months ago · 0 comments
#10 · llama.cpp on Nvidia RTX-3500, RTX-A4500 dual, RTX-4090 dual · opened by obriensystems 4 months ago · 14 comments
#9 · Investigate RAG from Meta · opened by obriensystems 4 months ago · 0 comments
#8 · TensorFlow for CUDA and Metal using Java · opened by obriensystems 5 months ago · 0 comments
#7 · llama.cpp on Mac Silicon M1Max and M2Ultra · opened by obriensystems 5 months ago · 12 comments
#6 · Google Cloud TPUv5 training · opened by obriensystems 7 months ago · 0 comments
#5 · Work with the Google C4 dataset of Common Crawl · opened by obriensystems 7 months ago · 0 comments
#4 · Adjust strategy in TensorFlow code to allow for more than 2 GPUs · opened by obriensystems 7 months ago · 0 comments
#3 · TensorFlow on Google Cloud G2 VMs running multiple L4 GPUs · opened by obriensystems 7 months ago · 2 comments
#2 · TensorFlow on Intel, NVidia and OSX platforms · opened by obriensystems 7 months ago · 3 comments
#1 · ML/AI on various platforms · opened by obriensystems 9 months ago · 3 comments