intel / ai-containers

This repository contains Dockerfiles, scripts, YAML files, Helm charts, and related assets used to scale out AI containers with versions of TensorFlow and PyTorch that have been optimized for Intel platforms. Scaling is done with Python, Docker, Kubernetes, Kubeflow, cnvrg.io, Helm, and other container orchestration frameworks for use in the cloud and on-premises.
https://intel.github.io/ai-containers/
Apache License 2.0
23 stars 17 forks

Bump the pytorch group across 1 directory with 13 updates #379

Closed · dependabot[bot] closed this 3 weeks ago

dependabot[bot] commented 3 weeks ago

Bumps the pytorch group with 13 updates in the /pytorch directory:

| Package | From | To |
| --- | --- | --- |
| accelerate | 0.33.0 | 0.34.2 |
| datasets | 2.21.0 | 3.0.0 |
| evaluate | 0.4.2 | 0.4.3 |
| onnxruntime-extensions | 0.11.0 | 0.12.0 |
| onnxruntime | 1.18.1 | 1.19.2 |
| protobuf | 5.27.3 | 5.28.1 |
| scikit-learn | 1.5.1 | 1.5.2 |
| tokenizers | 0.19.1 | 0.20.0 |
| transformers | 4.44.0 | 4.44.2 |
| numpy | 1.26.4 | 2.1.1 |
| jupyterlab | 4.3.0b0 | 4.3.0b1 |
| neural-compressor | 3.0 | 3.0.2 |
| torchtext | 0.18.0 | 0.18.0+cpu |
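Dependabot applies a group bump like this by rewriting version pins in the directory's requirements files. An illustrative excerpt of what the updated pins would look like after this PR (file layout assumed, not taken from the repository):

```text
accelerate==0.34.2
datasets==3.0.0
evaluate==0.4.3
onnxruntime==1.19.2
protobuf==5.28.1
scikit-learn==1.5.2
numpy==2.1.1
```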

Updates accelerate from 0.33.0 to 0.34.2

Release notes

Sourced from accelerate's releases.

v0.34.1 Patchfix

Bug fixes

  • Fixes an issue where processed DataLoaders could no longer be pickled in #3074 thanks to @​byi8220
  • Fixes an issue when using FSDP where default_transformers_cls_names_to_wrap would separate _no_split_modules by characters instead of keeping it as a list of layer names in #3075

Full Changelog: https://github.com/huggingface/accelerate/compare/v0.34.0...v0.34.1

v0.34.0: StatefulDataLoader Support, FP8 Improvements, and PyTorch Updates!

Dependency Changes

  • Updated Safetensors Requirement: The library now requires safetensors version 0.4.3.
  • Added support for Numpy 2.0: The library now fully supports numpy 2.0.0

Core

New Script Behavior Changes

  • Process Group Management: PyTorch now requires users to destroy process groups after training. The accelerate library will handle this automatically with accelerator.end_training(), or you can do it manually using PartialState().destroy_process_group().
  • MLU Device Support: Added support for saving and loading RNG states on MLU devices by @​huismiling
  • NPU Support: Corrected backend and distributed settings when using transfer_to_npu, ensuring better performance and compatibility.

DataLoader Enhancements

  • Stateful DataLoader: We are excited to announce that early support has been added for the StatefulDataLoader from torchdata, allowing better handling of data-loading state. Enable it by passing use_stateful_dataloader=True to the DataLoaderConfiguration; when load_state() is called, the DataLoader will automatically resume from its last step, with no need to iterate through already-seen batches.
  • Decoupled Data Loader Preparation: The prepare_data_loader() function is now independent of the Accelerator, giving you more flexibility towards which API levels you would like to use.
  • XLA Compatibility: Added support for skipping initial batches when using XLA.
  • Improved State Management: Bug fixes and enhancements for saving/loading DataLoader states, ensuring smoother training sessions.
  • Epoch Setting: Introduced the set_epoch function for MpDeviceLoaderWrapper.
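The "resume from the last step" behavior described above can be illustrated without torchdata. Below is a minimal, self-contained sketch of the state_dict/load_state_dict contract a stateful loader exposes; the class itself is hypothetical and only mirrors the PyTorch naming convention:

```python
class ResumableLoader:
    """Toy iterable that can checkpoint and resume its position (illustration only)."""

    def __init__(self, data):
        self.data = list(data)
        self.pos = 0  # index of the next item to yield

    def __iter__(self):
        while self.pos < len(self.data):
            item = self.data[self.pos]
            self.pos += 1
            yield item

    def state_dict(self):
        return {"pos": self.pos}

    def load_state_dict(self, state):
        self.pos = state["pos"]


loader = ResumableLoader(range(5))
it = iter(loader)
first_two = [next(it), next(it)]   # consume two items, then "crash"
ckpt = loader.state_dict()         # {"pos": 2}

resumed = ResumableLoader(range(5))
resumed.load_state_dict(ckpt)      # resume from step 2, not from scratch
rest = list(resumed)               # the remaining items
```

The point of the pattern is that the loader itself, not the training loop, owns the resumption state, which is what makes "no more iterating through passed batches" possible.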

FP8 Training Improvements

  • Enhanced FP8 Training: Fully Sharded Data Parallelism (FSDP) and DeepSpeed support now work seamlessly with TransformerEngine FP8 training, including better defaults for the quantized FP8 weights.
  • Integration baseline: We've added a new suite of examples and benchmarks to ensure that our TransformerEngine integration works exactly as intended. These scripts run one half using 🤗 Accelerate's integration and the other with raw TransformerEngine, giving users a clear example of what we do under the hood with accelerate, and a good sanity check to make sure nothing breaks down over time. Find them here
  • Import Fixes: Resolved issues with import checks for TransformerEngine that had downstream effects.
  • FP8 Docker Images: We've added new docker images for TransformerEngine and accelerate as well. Use docker pull huggingface/accelerate@gpu-fp8-transformerengine to quickly get an environment going.

torchpippy no more, long live torch.distributed.pipelining

  • With the latest PyTorch release, torchpippy is now fully integrated into torch core, and as a result we are exclusively supporting the PyTorch implementation from now on
  • There are breaking changes and updated examples that come from this shift. Namely:
    • Tracing of inputs is done with the shape each GPU will see, rather than the size of the total batch. So for 2 GPUs, one should pass in an input of [1, n, n] rather than [2, n, n] as before.
    • We no longer support encoder/decoder models. PyTorch tracing for pipelining no longer supports them, so the t5 example has been removed.
    • Computer vision model support currently does not work: there are some tracing issues regarding ResNets that we are actively looking into.
  • If any of these changes are too disruptive, we recommend pinning your accelerate version. If the encoder/decoder model support is actively blocking your inference using pippy, please open an issue and let us know. We can look into restoring the old torchpippy support if needed.
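The tracing change in the first bullet above can be stated concretely: with pipeline parallelism, each stage is traced on the per-GPU microbatch shape, not the global batch shape. A hypothetical helper (not part of accelerate) makes the arithmetic explicit:

```python
def microbatch_shape(global_batch_shape, num_gpus):
    """Shape each pipeline stage sees: the batch dimension is split across GPUs."""
    batch, *rest = global_batch_shape
    if batch % num_gpus != 0:
        raise ValueError("global batch must divide evenly across GPUs")
    return [batch // num_gpus, *rest]


# Previously one traced with the full batch [2, n, n];
# now, with 2 GPUs, the example input for tracing is [1, n, n].
n = 128
shape = microbatch_shape([2, n, n], num_gpus=2)
```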

Fully Sharded Data Parallelism (FSDP)

  • Environment Flexibility: Environment variables are now fully optional for FSDP, simplifying configuration. You can now fully create a FullyShardedDataParallelPlugin yourself manually with no need for environment patching:
from accelerate import FullyShardedDataParallelPlugin
fsdp_plugin = FullyShardedDataParallelPlugin(...)
  • FSDP RAM-efficient loading: Added a utility to enable RAM-efficient model loading (by setting the proper environment variable). This is generally needed if you are not using accelerate launch and need to ensure the env variables are set up properly for model loading:
from accelerate.utils import enable_fsdp_ram_efficient_loading, disable_fsdp_ram_efficient_loading

... (truncated)

Commits
  • c61f41c Release: v0.34.2
  • beb4378 Release: v0.34.1
  • e13bef2 Allow DataLoaderAdapter subclasses to be pickled by implementing __reduce__...
  • 73a1531 Fix FSDP auto_wrap using characters instead of full str for layers (#3075)
  • 159c0dd Release: v0.34.0
  • 8931e5e Remove skip_first_batches support for StatefulDataloader and fix all the te...
  • a848592 Speed up tests by shaving off subprocess when not needed (#3042)
  • 758d624 add set_epoch for MpDeviceLoaderWrapper (#3053)
  • b07ad2a Fix typo in comment (#3045)
  • 1d09a20 use duck-typing to ensure underlying optimizer supports schedulefree hooks (#...
  • Additional commits viewable in compare view


Updates datasets from 2.21.0 to 3.0.0

Release notes

Sourced from datasets's releases.

3.0.0

What's Changed

New Contributors

Full Changelog: https://github.com/huggingface/datasets/compare/2.21.0...3.0.0

Commits


Updates evaluate from 0.4.2 to 0.4.3

Release notes

Sourced from evaluate's releases.

0.4.3

This release adds support for datasets>=3.0 by removing calls to deprecated code

What's Changed

Full Changelog: https://github.com/huggingface/evaluate/compare/v0.4.2...v0.4.3

Commits


Updates onnxruntime-extensions from 0.11.0 to 0.12.0

Release notes

Sourced from onnxruntime-extensions's releases.

v0.12.0

What's Changed

  • Added C APIs for language, vision and audio processors including new FeatureExtractor for Whisper model
  • Support for Phi-3 Small Tokenizer and new OpenAI tiktoken format for fast loading of BPE tokenizers
  • Added new CUDA custom operators such as MulSigmoid, Transpose2DCast, ReplaceZero, AddSharedInput and MulSharedInput
  • Enhanced Custom Op Lite API on GPU and fused kernels for DORT
  • Bug fixes, including null bos_token for Qwen2 tokenizer and SentencePiece converted FastTokenizer issue on non-ASCII characters, as well as necessary updates for MSVC 19.40 and numpy 2.0 release
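For context on the fast BPE-tokenizer loading mentioned above: a trained BPE vocabulary is essentially a ranked list of merges applied to a pre-tokenized word. A minimal, library-free sketch of one merge step (illustrative only; real implementations such as tiktoken-format loaders precompute far more):

```python
def apply_merge(tokens, pair):
    """Replace every adjacent occurrence of `pair` with the fused token."""
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged


word = list("lowest")                  # ['l', 'o', 'w', 'e', 's', 't']
word = apply_merge(word, ("e", "s"))   # ['l', 'o', 'w', 'es', 't']
word = apply_merge(word, ("es", "t"))  # ['l', 'o', 'w', 'est']
```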

New Contributors

Full Changelog: https://github.com/microsoft/onnxruntime-extensions/compare/v.0.11.0...v0.12.0

Commits
  • cb47d2c Update nuget extraction path for iOS xcframework (#792)
  • b27fbbe Update macosx framework packaging to follow apple guidelines (#776) (#789)
  • c7a2d45 Update build-package-for-windows.yml (#784)
  • 3ce1e9f Upgrade ESRP signing task from v2 to v5 (#780)
  • e113ed3 removed OpenAIAudioToText from config (#777)
  • c9c11b4 Fix the windows API missing issue and Linux shared library size issue for Jav...
  • c3145b8 add the decoder_prompt_id for whisper tokenizer (#775)
  • 620050f reimplement resize cpu kernel for image processing (#768)
  • d79299e increase timeout (#773)
  • 735041e increase timeout (#772)
  • Additional commits viewable in compare view


Updates onnxruntime from 1.18.1 to 1.19.2

Release notes

Sourced from onnxruntime's releases.

ONNX Runtime v1.19.2

Announcements

  • ORT 1.19.2 is a small patch release, fixing some broken workflows and introducing bug fixes.

Build System & Packages

  • Fixed the signing of native DLLs.
  • Disabled absl symbolize in Windows Release build to avoid dependency on dbghelp.dll.

Training

  • Restored support for CUDA compute capability 7.0 and 7.5 with CUDA 12, and 6.0 and 6.1 with CUDA 11.
  • Several fixes for training CI pipelines.

Mobile

  • Fixed ArgMaxOpBuilder::AddToModelBuilderImpl() nullptr Node access for CoreML EP.

Generative AI

  • Added CUDA kernel for Phi3 MoE.
  • Added smooth softmax support in CUDA and CPU kernels for the GroupQueryAttention operator.
  • Fixed number of splits calculations in GroupQueryAttention CUDA operator.
  • Enabled causal support in the MultiHeadAttention CUDA operator.

Contributors

@​prathikr, @​mszhanyi, @​edgchen1, @​tianleiwu, @​wangyems, @​aciddelgado, @​mindest, @​snnn, @​baijumeswani, @​MaanavD

Thanks to everyone who helped ship this release smoothly!

Full Changelog: https://github.com/microsoft/onnxruntime/compare/v1.19.0...v1.19.2

ONNX Runtime v1.19.0

Announcements

Build System & Packages

  • Numpy support for 2.x has been added
  • Qualcomm SDK has been upgraded to 2.25
  • ONNX has been upgraded from 1.16 → 1.16.1
  • Default GPU packages use CUDA 12.x and cuDNN 9.x (previously CUDA 11.x/cuDNN 8.x); the CUDA 11.x/cuDNN 8.x packages have moved to the aiinfra VS feed.
  • TensorRT 10.2 support added
  • Introduced Java CUDA 12 packages on Maven.
  • Discontinued support for Xamarin. (Xamarin reached EOL on May 1, 2024)
  • Discontinued support for macOS 11 and increased the minimum supported macOS version to 12. (macOS 11 reached EOL in September 2023)
  • Discontinued support for iOS 12 and increased the minimum supported iOS version to 13.

Core

Performance

  • Added QDQ support for INT4 quantization in CPU and CUDA Execution Providers
  • Implemented FlashAttention on CPU to improve performance for GenAI prompt cases
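The FlashAttention bullet deserves a note: its core trick is a streaming ("online") softmax that keeps a running maximum and normalizer so attention can be computed blockwise without materializing the full score matrix. A minimal scalar sketch of that numerically stable recurrence (an illustration of the idea, not ONNX Runtime's kernel):

```python
import math


def online_softmax(xs):
    """One-pass, numerically stable softmax: track running max m and normalizer d."""
    m, d = float("-inf"), 0.0
    for x in xs:
        m_new = max(m, x)
        # rescale the old normalizer whenever the running max changes
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    return [math.exp(x - m) / d for x in xs]


probs = online_softmax([1.0, 2.0, 3.0])
```

Because the rescaling step only needs the previous (m, d) pair, the same recurrence lets a kernel process attention scores tile by tile, which is what makes the CPU FlashAttention-style implementation memory-friendly for long prompts.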

... (truncated)

Commits


Updates protobuf from 5.27.3 to 5.28.1

Commits
  • 10ef3f7 Updating version.json and repo version numbers to: 28.1
  • d70f077 Merge pull request #18191 from protocolbuffers/cp-ruby-upb
  • 60e585c Update staleness
  • 70b77de Fix a potential Ruby-upb use of uninitialized memory.
  • 5b4b3af Merge pull request #18188 from acozzette/28-fix
  • 8ea3bb1 Fix compiler error with StrongReferenceToType()
  • 9deedf0 upb: fix uninitialized upb_MessageValue buffer bugs (#18160)
  • 3454ed8 Merge pull request #18013 from protocolbuffers/28.x-202408281753
  • 976ab41 Updating version.json and repo version numbers to: 28.1-dev
  • 439c42c Updating version.json and repo version numbers to: 28.0
  • Additional commits viewable in compare view


Updates scikit-learn from 1.5.1 to 1.5.2

Release notes

Sourced from scikit-learn's releases.

Scikit-learn 1.5.2

We're happy to announce the 1.5.2 release.

This release contains fixes for a few regressions introduced in 1.5.

You can see the changelog here: https://scikit-learn.org/stable/whats_new/v1.5.html#version-1-5-2

You can upgrade with pip as usual:

pip install -U scikit-learn

The conda-forge builds can be installed using:

conda install -c conda-forge scikit-learn

Thanks to everyone who contributed to this release!

Commits
  • 156ef14 [cd build] trigger ci/cd
  • 40c7416 DOC update the list of contributors for 1.5.2 (#29819)
  • c119c7e DOC add orphan option to developers/index.rst
  • 4d838dc TST fix tolerance as in #29400
  • 2e79f52 DOC fix entry in changelog for backport happening in 1.5.2 (#29815)
  • c735641 MAINT install setuptools for debian-32bits
  • c993dd2 DOC update repr for NumPy 2.0
  • 8ade4f5 MAINT bump from 1.5.1 to 1.5.2
  • 04b71d2 FIX solve conflict git
  • b5b5017 MAINT update lock file
  • Additional commits viewable in compare view


Updates tokenizers from 0.19.1 to 0.20.0

Release notes

Sourced from tokenizers's releases.

Release v0.20.0: faster encode, better python support

Release v0.20.0

This release is focused on performance and user experience.

Performance:

First off, we did a bit of benchmarking and found some room for improvement. With a few minor changes (mostly #1587), here is what we get on Llama3 running on a g6 instance on AWS (benchmark script: https://github.com/huggingface/tokenizers/blob/main/bindings/python/benches/test_tiktoken.py):

Python API

We shipped better deserialization errors in general, and support for __str__ and __repr__ for all objects. This allows for much easier debugging; see this:

>>> from tokenizers import Tokenizer;
>>> tokenizer = Tokenizer.from_pretrained("bert-base-uncased");
>>> print(tokenizer)
Tokenizer(version="1.0", truncation=None, padding=None, added_tokens=[{"id":0, "content":"[PAD]", "single_word":False, "lstrip":False, "rstrip":False, ...}, {"id":100, "content":"[UNK]", "single_word":False, "lstrip":False, "rstrip":False, ...}, {"id":101, "content":"[CLS]", "single_word":False, "lstrip":False, "rstrip":False, ...}, {"id":102, "content":"[SEP]", "single_word":False, "lstrip":False, "rstrip":False, ...}, {"id":103, "content":"[MASK]", "single_word":False, "lstrip":False, "rstrip":False, ...}], normalizer=BertNormalizer(clean_text=True, handle_chinese_chars=True, strip_accents=None, lowercase=True), pre_tokenizer=BertPreTokenizer(), post_processor=TemplateProcessing(single=[SpecialToken(id="[CLS]", type_id=0), Sequence(id=A, type_id=0), SpecialToken(id="[SEP]", type_id=0)], pair=[SpecialToken(id="[CLS]", type_id=0), Sequence(id=A, type_id=0), SpecialToken(id="[SEP]", type_id=0), Sequence(id=B, type_id=1), SpecialToken(id="[SEP]", type_id=1)], special_tokens={"[CLS]":SpecialToken(id="[CLS]", ids=[101], tokens=["[CLS]"]), "[SEP]":SpecialToken(id="[SEP]", ids=[102], tokens=["[SEP]"])}), decoder=WordPiece(prefix="##", cleanup=True), model=WordPiece(unk_token="[UNK]", continuing_subword_prefix="##", max_input_chars_per_word=100, vocab={"[PAD]":0, "[unused0]":1, "[unused1]":2, "[unused2]":3, "[unused3]":4, ...}))

>>> tokenizer
Tokenizer(version="1.0", truncation=None, padding=None, added_tokens=[{"id":0, "content":"[PAD]", "single_word":False, "lstrip":False, "rstrip":False, "normalized":False, "special":True}, {"id":100, "content":"[UNK]", "single_word":False, "lstrip":False, "rstrip":False, "normalized":False, "special":True}, {"id":101, "content":"[CLS]", "single_word":False, "lstrip":False, "rstrip":False, "normalized":False, "special":True}, {"id":102, "content":"[SEP]", "single_word":False, "lstrip":False, "rstrip":False, "normalized":False, "special":True}, {"id":103, "content":"[MASK]", "single_word":False, "lstrip":False, "rstrip":False, "normalized":False, "special":True}], normalizer=BertNormalizer(clean_text=True, handle_chinese_chars=True, strip_accents=None, lowercase=True), pre_tokenizer=BertPreTokenizer(), post_processor=TemplateProcessing(single=[SpecialToken(id="[CLS]", type_id=0), Sequence(id=A, type_id=0), SpecialToken(id="[SEP]", type_id=0)], pair=[SpecialToken(id="[CLS]", type_id=0), Sequence(id=A, type_id=0), SpecialToken(id="[SEP]", type_id=0), Sequence(id=B, type_id=1), SpecialToken(id="[SEP]", type_id=1)], special_tokens={"[CLS]":SpecialToken(id="[CLS]", ids=[101], tokens=["[CLS]"]), "[SEP]":SpecialToken(id="[SEP]", ids=[102], tokens=["[SEP]"])}), decoder=WordPiece(prefix="##", cleanup=True), model=WordPiece(unk_token="[UNK]", continuing_subword_prefix="##", max_input_chars_per_word=100, vocab={"[PAD]":0, "[unused0]":1, "[unused1]":2, ...}))

The pre_tokenizer.Sequence and normalizer.Sequence are also more accessible now:

from tokenizers import normalizers
norm = normalizers.Sequence([normalizers.Strip(), normalizers.BertNormalizer()])
norm[0]                    # index into the sequence to get the Strip normalizer
norm[1].lowercase = False  # mutate a nested normalizer's attribute in place

What's Changed

... (truncated)

Commits
  • a5adaac version 0.20.0
  • a8def07 Merge branch 'fix_release' of github.com:huggingface/tokenizers into branch_v...
  • fe50673 Fix CI
  • b253835 push cargo
  • fc3bb76 update dependencies
  • bfd9cde Perf improvement 16% by removing offsets. (#1587)
  • bd27fa5 add deserialize for pre tokenizers (#1603)
  • 56c9c70 Tests + Deserialization improvement for normalizers. (#1604)
  • 49dafd7 Fix strip python type (#1602)
  • bded212 Support None to reset pre_tokenizers and normalizers, and index sequences (...
  • Additional commits viewable in compare view


Updates transformers from 4.44.0 to 4.44.2

Release notes

Sourced from transformers's releases.

Release v4.44.2

Patch release v4.44.2, fixing mostly two regressions (for Jamba and for processors) that had not been caught!

Patch release v4.44.1

Here are the different fixes, mostly Gemma2 context length, nits here and there, and generation issues

Full Changelog: https://github.com/huggingface/transformers/compare/v4.44.0...v4.44.1

Commits


Updates numpy from 1.26.4 to 2.1.1

Release notes

Sourced from numpy's releases.

2.1.1 (Sep 3, 2024)

NumPy 2.1.1 Release Notes

NumPy 2.1.1 is a maintenance release that fixes bugs and regressions discovered after the 2.1.0 release.

The Python versions supported by this release are 3.10-3.13.

Contributors

A total of 7 people contributed to this release. People with a "+" by their names contributed a patch for the first time.

  • Andrew Nelson
  • Charles Harris
  • Mateusz Sokół
  • Maximilian Weigand +
  • Nathan Goldbaum
  • Pieter Eendebak
  • Sebastian Berg

Pull requests merged

A total of 10 pull requests were merged for this release.

  • #27236: REL: Prepare for the NumPy 2.1.0 release [wheel build]
  • #27252: MAINT: prepare 2.1.x for further development
  • #27259: BUG: revert unintended change in the return value of set_printoptions
  • #27266: BUG: fix reference counting bug in __array_interface__ implementation...
  • #27267: TST: Add regression test for missing descr in array-interface
  • #27276: BUG: Fix #27256 and #27257
  • #27278: BUG: Fix array_equal for numeric and non-numeric scalar types
  • #27287: MAINT: Update maintenance/2.1.x after the 2.0.2 release
  • #27303: BLD: cp311- macosx_arm64 wheels [wheel build]
  • #27304: BUG: f2py: better handle filtering of public/private subroutines

Checksums

MD5

3053a97400db800b7377749e691eb39e  numpy-2.1.1-cp310-cp310-macosx_10_9_x86_64.whl
84b752a2220dce7c96ff89eef4f4aec3  numpy-2.1.1-cp310-cp310-macosx_11_0_arm64.whl
47ed4f704a64261f07ca24ef2e674524  numpy-2.1.1-cp310-cp310-macosx_14_0_arm64.whl
b8a45caa870aee980c298053cf064d28  numpy-2.1.1-cp310-cp310-macosx_14_0_x86_64.whl
e097ad5eee572b791b4a25eedad6df4a  numpy-2.1.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
ae502c99315884cda7f0236a07c035c4  numpy-2.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
841a859d975c55090c0b60b72aab93a3  numpy-2.1.1-cp310-cp310-musllinux_1_1_x86_64.whl
d51be2b17f5b87aac64ab80fdfafc85e  numpy-2.1.1-cp310-cp310-musllinux_1_2_aarch64.whl
1f8249bd725397c6233fe6a0e8ad18b1  numpy-2.1.1-cp310-cp310-win32.whl
d38d6f06589c1ec104a6a31ff6035781  numpy-2.1.1-cp310-cp310-win_amd64.whl

... (truncated)

Commits
  • 48606ab Merge pull request #27328 from charris/prepare-2.1.1
  • a7cb4c4 REL: Prepare for the NumPy 2.1.1 release [wheel build]
  • 884c92b Merge pull request #27303 from charris/backport-27284
  • ca7f5c1 Merge pull request #27304 from charris/backport-27049
  • 2a49507 BUG: f2py: better handle filtering of public/private subroutines
  • d4306dd TST: Add regression test for gh-26920
  • db9668d BLD: cp311- macosx_arm64 wheels [wheel build]
  • c6ff254 Merge pull request #27287 from charris/post-2.0.2-release-update
  • 326bc17 MAINT: Update main after the 2.0.2 release
  • 8164b7c Merge pull request #27278 from charris/backport-27275
  • Additional commits viewable in compare view


Updates jupyterlab from 4.3.0b0 to 4.3.0b1

Release notes

Sourced from jupyterlab's releases.

v4.3.0b1

4.3.0b1

(Full Changelog)

Enhancements made

Bugs fixed

  • Use locale name instead of display/native name to toggle language #16710 (@​maitreya2954)
  • Add null checks for "input" variable #16705 (@​JasonWeill)
  • Null checks to guard against cell toolbar errors on startup #16704 (@​JasonWeill)
  • Update contents model on file change due to save from RTC #16695 (@​krassowski)
  • Add a guard on uninitialized nodes when resizing, remove log #16693 (@​krassowski)
  • Fix output streaming in RTC

github-actions[bot] commented 3 weeks ago

Dependency Review

The following issues were found:

  • ✅ 0 vulnerable package(s)
  • ✅ 0 package(s) with incompatible licenses
  • ✅ 0 package(s) with invalid SPDX license definitions
  • ⚠️ 1 package(s) with unknown licenses. See the Details below.

License Issues

pytorch/hf-genai-requirements.txt

| Package | Version | License | Issue Type |
| --- | --- | --- | --- |
| protobuf | 5.28.1 | Null | Unknown License |

OpenSSF Scorecard

Scorecard details
pip/accelerate 0.34.2 :green_circle: 6.3

| Check | Score | Reason |
| --- | --- | --- |
| Code-Review | :green_circle: 9 | Found 29/30 approved changesets -- score normalized to 9 |
| Maintained | :green_circle: 10 | 30 commit(s) and 17 issue activity found in the last 90 days -- score normalized to 10 |
| CII-Best-Practices | :warning: 0 | no effort to earn an OpenSSF best practices badge detected |
| License | :green_circle: 10 | license file detected |
| Signed-Releases | :warning: -1 | no releases found |
| Branch-Protection | :warning: -1 | internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration |
| Dangerous-Workflow | :green_circle: 10 | no dangerous workflow patterns detected |
| Binary-Artifacts | :green_circle: 10 | no binaries found in the repo |
| Token-Permissions | :warning: 0 | detected GitHub workflow tokens with excessive permissions |
| Security-Policy | :warning: 0 | security policy file not detected |
| Fuzzing | :warning: 0 | project is not fuzzed |
| Vulnerabilities | :green_circle: 10 | 0 existing vulnerabilities detected |
| Packaging | :green_circle: 10 | packaging workflow detected |
| Pinned-Dependencies | :warning: 0 | dependency not pinned by hash detected -- score normalized to 0 |
| SAST | :green_circle: 4 | SAST tool is not run on all commits -- score normalized to 4 |
pip/datasets 3.0.0 :green_circle: 5.9

| Check | Score | Reason |
| --- | --- | --- |
| Code-Review | :green_circle: 4 | Found 13/30 approved changesets -- score normalized to 4 |
| Maintained | :green_circle: 10 | 30 commit(s) and 8 issue activity found in the last 90 days -- score normalized to 10 |
| CII-Best-Practices | :warning: 0 | no effort to earn an OpenSSF best practices badge detected |
| License | :green_circle: 10 | license file detected |
| Signed-Releases | :warning: -1 | no releases found |
| Branch-Protection | :warning: -1 | internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration |
| Dangerous-Workflow | :green_circle: 10 | no dangerous workflow patterns detected |
| Packaging | :warning: -1 | packaging workflow not detected |
| Security-Policy | :green_circle: 10 | security policy file detected |
| Binary-Artifacts | :green_circle: 10 | no binaries found in the repo |
| Token-Permissions | :warning: 0 | detected GitHub workflow tokens with excessive permissions |
| Pinned-Dependencies | :warning: 0 | dependency not pinned by hash detected -- score normalized to 0 |
| Vulnerabilities | :green_circle: 10 | 0 existing vulnerabilities detected |
| Fuzzing | :warning: 0 | project is not fuzzed |
| SAST | :warning: 0 | SAST tool is not run on all commits -- score normalized to 0 |
pip/evaluate 0.4.3 :green_circle: 5.9

| Check | Score | Reason |
| --- | --- | --- |
| Code-Review | :green_circle: 9 | Found 29/30 approved changesets -- score normalized to 9 |
| Maintained | :green_circle: 10 | 5 commit(s) and 8 issue activity found in the last 90 days -- score normalized to 10 |
| CII-Best-Practices | :warning: 0 | no effort to earn an OpenSSF best practices badge detected |
| License | :green_circle: 10 | license file detected |
| Signed-Releases | :warning: -1 | no releases found |
| Branch-Protection | :warning: -1 | internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration |
| Packaging | :warning: -1 | packaging workflow not detected |
| Dangerous-Workflow | :green_circle: 10 | no dangerous workflow patterns detected |
| Binary-Artifacts | :green_circle: 10 | no binaries found in the repo |
| Token-Permissions | :warning: 0 | detected GitHub workflow tokens with excessive permissions |
| Pinned-Dependencies | :warning: 0 | dependency not pinned by hash detected -- score normalized to 0 |
| Security-Policy | :warning: 0 | security policy file not detected |
| Fuzzing | :warning: 0 | project is not fuzzed |
| Vulnerabilities | :green_circle: 9 | 1 existing vulnerabilities detected |
| SAST | :green_circle: 4 | SAST tool is not run on all commits -- score normalized to 4 |
pip/onnxruntime 1.19.2 :green_circle: 6.8

| Check | Score | Reason |
| --- | --- | --- |
| Code-Review | :green_circle: 10 | all last 30 commits are reviewed through GitHub |
| Maintained | :green_circle: 10 | 30 commit(s) out of 30 and 8 issue activity out of 30 found in the last 90 days -- score normalized to 10 |
| CII-Best-Practices | :warning: 0 | no badge detected |
| Vulnerabilities | :green_circle: 10 | no vulnerabilities detected |
| Signed-Releases | :warning: 0 | 0 out of 5 artifacts are signed or have provenance |
| Branch-Protection | :green_circle: 8 | branch protection is not maximal on development and all release branches |
| Security-Policy | :green_circle: 10 | security policy file detected |
| Dangerous-Workflow | :green_circle: 10 | no dangerous workflow patterns detected |
| Packaging | :warning: -1 | no published package detected |
| License | :green_circle: 10 | license file detected |
| Token-Permissions | :warning: 0 | non read-only tokens detected in GitHub workflows |
| Dependency-Update-Tool | :green_circle: 10 | update tool detected |
| Binary-Artifacts | :green_circle: 10 | no binaries found in the repo |
| Fuzzing | :warning: 0 | project is not fuzzed |
| Pinned-Dependencies | :warning: 0 | dependency not pinned by hash detected -- score normalized to 0 |
pip/onnxruntime-extensions 0.12.0 :green_circle: 6.1

| Check | Score | Reason |
| --- | --- | --- |
| Code-Review | :green_circle: 9 | Found 27/30 approved changesets -- score normalized to 9 |
| Maintained | :green_circle: 10 | 30 commit(s) and 7 issue activity found in the last 90 days -- score normalized to 10 |
| CII-Best-Practices | :warning: 0 | no effort to earn an OpenSSF best practices badge detected |
| License | :green_circle: 10 | license file detected |
| Signed-Releases | :warning: -1 | no releases found |
| Branch-Protection | :warning: -1 | internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration |
| Packaging | :warning: -1 | packaging workflow not detected |
| Security-Policy | :green_circle: 10 | security policy file detected |
| Token-Permissions | :warning: 0 | detected GitHub workflow tokens with excessive permissions |
| Dangerous-Workflow | :green_circle: 10 | no dangerous workflow patterns detected |
| SAST | :warning: 0 | SAST tool is not run on all commits -- score normalized to 0 |
| Fuzzing | :warning: 0 | project is not fuzzed |
| Vulnerabilities | :green_circle: 10 | 0 existing vulnerabilities detected |
| Binary-Artifacts | :green_circle: 7 | binaries present in source code |
| Pinned-Dependencies | :warning: 0 | dependency not pinned by hash detected -- score normalized to 0 |
pip/protobuf 5.28.1 :green_circle: 6.8

| Check | Score | Reason |
| --- | --- | --- |
| Binary-Artifacts | :green_circle: 10 | no binaries found in the repo |
| Branch-Protection | :warning: -1 | internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration |
| CI-Tests | :green_circle: 10 | 25 out of 25 merged PRs checked by a CI test -- score normalized to 10 |
| CII-Best-Practices | :warning: 0 | no effort to earn an OpenSSF best practices badge detected |
| Code-Review | :warning: 1 | found 27 unreviewed changesets out of 30 -- score normalized to 1 |
| Contributors | :green_circle: 10 | 12 different organizations found -- score normalized to 10 |
| Dangerous-Workflow | :green_circle: 10 | no dangerous workflow patterns detected |
| Dependency-Update-Tool | :green_circle: 10 | update tool detected |
| Fuzzing | :green_circle: 10 | project is fuzzed |
| License | :green_circle: 9 | license file detected |
| Maintained | :green_circle: 10 | 30 commit(s) out of 30 and 16 issue activity out of 30 found in the last 90 days -- score normalized to 10 |
| Packaging | :warning: -1 | no published package detected |
| Pinned-Dependencies | :warning: 0 | dependency not pinned by hash detected -- score normalized to 0 |
| SAST | :warning: 0 | SAST tool is not run on all commits -- score normalized to 0 |
| Security-Policy | :green_circle: 10 | security policy file detected |
| Signed-Releases | :warning: 0 | 0 out of 5 artifacts are signed or have provenance |
| Token-Permissions | :green_circle: 10 | GitHub workflow tokens follow principle of least privilege |
| Vulnerabilities | :green_circle: 7 | 3 existing vulnerabilities detected |
pip/scikit-learn 1.5.2 :green_circle: 9.5

| Check | Score | Reason |
| --- | --- | --- |
| Code-Review | :green_circle: 10 | all changesets reviewed |
| Maintained | :green_circle: 10 | 30 commit(s) and 22 issue activity found in the last 90 days -- score normalized to 10 |
| CII-Best-Practices | :warning: 0 | no effort to earn an OpenSSF best practices badge detected |
| License | :green_circle: 10 | license file detected |
| Signed-Releases | :warning: -1 | no releases found |
| Branch-Protection | :warning: -1 | internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration |
| Dangerous-Workflow | :warning: -1 | no workflows found |
| Security-Policy | :green_circle: 10 | security policy file detected |
| Token-Permissions | :warning: -1 | No tokens found |
| Packaging | :warning: -1 | packaging workflow not detected |
| SAST | :green_circle: 10 | SAST tool is run on all commits |
| Binary-Artifacts | :green_circle: 10 | no binaries found in the repo |
| Vulnerabilities | :green_circle: 10 | 0 existing vulnerabilities detected |
| Pinned-Dependencies | :warning: -1 | no dependencies found |
| Fuzzing | :green_circle: 10 | project is fuzzed |
pip/transformers 4.44.2 :green_circle: 4.4

| Check | Score | Reason |
| --- | --- | --- |
| Code-Review | :green_circle: 9 | Found 28/30 approved changesets -- score normalized to 9 |
| Maintained | :green_circle: 10 | 30 commit(s) and 19 issue activity found in the last 90 days -- score normalized to 10 |
| CII-Best-Practices | :warning: 0 | no effort to earn an OpenSSF best practices badge detected |
| License | :green_circle: 10 | license file detected |
| Branch-Protection | :warning: -1 | internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration |
| Signed-Releases | :warning: -1 | no releases found |
| Security-Policy | :green_circle: 10 | security policy file detected |
| Dangerous-Workflow | :warning: 0 | dangerous workflow patterns detected |
| Token-Permissions | :warning: 0 | detected GitHub workflow tokens with excessive permissions |
| Binary-Artifacts | :green_circle: 10 | no binaries found in the repo |
| SAST | :warning: 0 | SAST tool is not run on all commits -- score normalized to 0 |
| Fuzzing | :warning: 0 | project is not fuzzed |
| Packaging | :green_circle: 10 | packaging workflow detected |
| Pinned-Dependencies | :warning: 0 | dependency not pinned by hash detected -- score normalized to 0 |
| Vulnerabilities | :warning: 0 | 463 existing vulnerabilities detected |
    pip/jupyterlab 4.3.0b1 :green_circle: 5.5
    Details

    | Check | Score | Reason |
    | --- | --- | --- |
    | Code-Review | :green_circle: 8 | Found 25/28 approved changesets -- score normalized to 8 |
    | Maintained | :green_circle: 10 | 30 commit(s) and 12 issue activity found in the last 90 days -- score normalized to 10 |
    | CII-Best-Practices | :warning: 2 | badge detected: InProgress |
    | License | :green_circle: 9 | license file detected |
    | Signed-Releases | :warning: 0 | Project has not signed or included provenance with any releases. |
    | Branch-Protection | :warning: -1 | internal error: error during GetBranch(4.2.x): error during branchesHandler.query: internal error: githubv4.Query: Resource not accessible by integration |
    | Packaging | :warning: -1 | packaging workflow not detected |
    | Token-Permissions | :warning: 0 | detected GitHub workflow tokens with excessive permissions |
    | Dangerous-Workflow | :green_circle: 10 | no dangerous workflow patterns detected |
    | SAST | :green_circle: 10 | SAST tool is run on all commits |
    | Binary-Artifacts | :green_circle: 10 | no binaries found in the repo |
    | Security-Policy | :green_circle: 10 | security policy file detected |
    | Fuzzing | :warning: 0 | project is not fuzzed |
    | Pinned-Dependencies | :warning: 0 | dependency not pinned by hash detected -- score normalized to 0 |
    | Vulnerabilities | :warning: 0 | 10 existing vulnerabilities detected |
    pip/neural-compressor 3.0.2 :green_circle: 7.2
    Details

    | Check | Score | Reason |
    | --- | --- | --- |
    | Code-Review | :green_circle: 8 | Found 24/29 approved changesets -- score normalized to 8 |
    | Maintained | :green_circle: 10 | 30 commit(s) and 7 issue activity found in the last 90 days -- score normalized to 10 |
    | License | :green_circle: 10 | license file detected |
    | CII-Best-Practices | :warning: 2 | badge detected: InProgress |
    | Signed-Releases | :warning: -1 | no releases found |
    | Packaging | :warning: -1 | packaging workflow not detected |
    | Dangerous-Workflow | :green_circle: 10 | no dangerous workflow patterns detected |
    | Security-Policy | :green_circle: 10 | security policy file detected |
    | Branch-Protection | :green_circle: 8 | branch protection is not maximal on development and all release branches |
    | Token-Permissions | :green_circle: 10 | GitHub workflow tokens follow principle of least privilege |
    | SAST | :green_circle: 10 | SAST tool is run on all commits |
    | Binary-Artifacts | :green_circle: 10 | no binaries found in the repo |
    | Fuzzing | :warning: 0 | project is not fuzzed |
    | Pinned-Dependencies | :warning: 0 | dependency not pinned by hash detected -- score normalized to 0 |
    | Vulnerabilities | :warning: 0 | 92 existing vulnerabilities detected |
    pip/torchtext 0.18.0+cpu :green_circle: 5.3
    Details

    | Check | Score | Reason |
    | --- | --- | --- |
    | Maintained | :warning: 1 | 2 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 1 |
    | Code-Review | :green_circle: 8 | Found 25/30 approved changesets -- score normalized to 8 |
    | CII-Best-Practices | :warning: 0 | no effort to earn an OpenSSF best practices badge detected |
    | License | :green_circle: 10 | license file detected |
    | Signed-Releases | :warning: -1 | no releases found |
    | Branch-Protection | :warning: -1 | internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration |
    | Dangerous-Workflow | :green_circle: 10 | no dangerous workflow patterns detected |
    | Packaging | :warning: -1 | packaging workflow not detected |
    | Binary-Artifacts | :green_circle: 10 | no binaries found in the repo |
    | Token-Permissions | :warning: 0 | detected GitHub workflow tokens with excessive permissions |
    | Security-Policy | :warning: 0 | security policy file not detected |
    | Fuzzing | :warning: 0 | project is not fuzzed |
    | Pinned-Dependencies | :warning: 0 | dependency not pinned by hash detected -- score normalized to 0 |
    | Vulnerabilities | :green_circle: 10 | 0 existing vulnerabilities detected |
    | SAST | :green_circle: 8 | SAST tool detected but not run on all commits |

    Scanned Manifest Files

    pytorch/hf-genai-requirements.txt
    • accelerate@0.34.2
    • datasets@3.0.0
    • evaluate@0.4.3
    • onnxruntime@1.19.2
    • onnxruntime-extensions@0.12.0
    • protobuf@5.28.1
    • scikit-learn@1.5.2
    • transformers@4.44.2
    • accelerate@0.33.0
    • datasets@2.21.0
    • evaluate@0.4.2
    • onnxruntime@1.18.1
    • onnxruntime-extensions@0.11.0
    • protobuf@5.27.3
    • scikit-learn@1.5.1
    • transformers@4.44.0
    pytorch/jupyter-requirements.txt
    • jupyterlab@4.3.0b1
    • jupyterlab@4.3.0b0
    pytorch/multinode/requirements.txt
    • neural-compressor@3.0.2
    • neural-compressor@3.0
    pytorch/serving/torchserve-requirements.txt
    • torchtext@0.18.0+cpu
    • torchtext@0.18.0
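The manifest lists above pair each new pin with the old version it replaces. As a quick sanity check after rebuilding an image against the updated requirements files, a small script can compare installed versions against the new pins; this is an illustrative sketch (the `BUMPED_PINS` mapping is copied from this PR's manifest, and the `check_pins` helper is hypothetical, not part of this repository):

```python
# Sketch: confirm an environment picked up the bumped pins from this PR.
# Pin values are copied from the scanned manifest list above; the helper
# itself is illustrative and not part of the intel/ai-containers repo.
from importlib import metadata

BUMPED_PINS = {
    "accelerate": "0.34.2",
    "datasets": "3.0.0",
    "evaluate": "0.4.3",
    "onnxruntime": "1.19.2",
    "onnxruntime-extensions": "0.12.0",
    "protobuf": "5.28.1",
    "scikit-learn": "1.5.2",
    "transformers": "4.44.2",
}

def check_pins(pins):
    """Return {package: (expected, installed)} for every mismatch.

    `installed` is None when the package is not present at all.
    """
    mismatches = {}
    for name, expected in pins.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            installed = None
        if installed != expected:
            mismatches[name] = (expected, installed)
    return mismatches

if __name__ == "__main__":
    for pkg, (want, got) in check_pins(BUMPED_PINS).items():
        print(f"{pkg}: expected {want}, found {got}")
```

An empty result means the environment matches the bumped pins exactly; local version suffixes such as `torchtext`'s `+cpu` are compared verbatim, since `importlib.metadata` reports the full installed version string.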
github-advanced-security[bot] commented 3 weeks ago

This pull request sets up GitHub code scanning for this repository. Once the scans have completed and the checks have passed, the analysis results for this pull request branch will appear on this overview. Once you merge this pull request, the 'Security' tab will show more code scanning analysis results (for example, for the default branch). Depending on your configuration and choice of analysis tool, future pull requests will be annotated with code scanning analysis results. For more information about GitHub code scanning, check out the documentation.