Spidy20 / Smart_Resume_Analyser_App

This is a web application for the Resume Analyser.

Configuration error #4

Status: Open. angadgaikwad opened this issue 1 year ago

angadgaikwad commented 1 year ago

Running the app raises:

ConfigValidationError: Config validation error

disabled                   field required
tokenizer                  field required
before_creation            field required
after_creation             field required
after_pipeline_creation    field required

{'pipeline': ['tok2vec', 'tagger', 'parser', 'ner'], 'lang': 'en', 'batch_size': 1000}

How to solve this?

SurajSanap commented 10 months ago

The error means spaCy v3 is validating the config against its schema, and the listed fields of the [nlp] block (disabled, tokenizer, before_creation, after_creation, after_pipeline_creation) are missing. The flat dict printed at the end of the error is spaCy v2's meta format, so this usually means a spaCy v2 model (or its meta.json) is being loaded under spaCy v3. If you do want to build a pipeline from a config in v3, the settings must be nested under an "nlp" block, and the creation callbacks must be registered functions that the config references by name; plain Python function objects in a flat dict will not validate.

Here's a sketch of how you can create a spaCy v3 pipeline from a config with those callbacks. The callback names are placeholders, and a rule-based sentencizer stands in for the trained components ("tok2vec", "tagger", etc. would each need full model settings in the "components" section):

import spacy
from spacy.language import Language

# In spaCy v3, creation callbacks are registered functions that the config
# references by name. Each registry entry is a factory returning the callback.
@spacy.registry.callbacks("customize_before_creation")
def make_before_creation():
    def before_creation(lang_cls):
        # Custom logic before the Language class is instantiated
        return lang_cls
    return before_creation

@spacy.registry.callbacks("customize_after_creation")
def make_after_creation():
    def after_creation(nlp):
        # Custom logic right after the nlp object is created
        return nlp
    return after_creation

@spacy.registry.callbacks("customize_after_pipeline_creation")
def make_after_pipeline_creation():
    def after_pipeline_creation(nlp):
        # Custom logic after all pipeline components are added
        return nlp
    return after_pipeline_creation

# Settings live in a nested "nlp" block; each component gets its own section
config = {
    "nlp": {
        "lang": "en",
        "pipeline": ["sentencizer"],
        "batch_size": 1000,
        "disabled": [],
        "tokenizer": {"@tokenizers": "spacy.Tokenizer.v1"},
        "before_creation": {"@callbacks": "customize_before_creation"},
        "after_creation": {"@callbacks": "customize_after_creation"},
        "after_pipeline_creation": {"@callbacks": "customize_after_pipeline_creation"},
    },
    "components": {
        "sentencizer": {"factory": "sentencizer"},
    },
}

# Build the pipeline from the config; missing defaults are auto-filled
nlp = Language.from_config(config)

# Now you can use the created pipeline (nlp) for processing text
doc = nlp("Hello world. This is a test.")

Make sure to adapt the registered callbacks to your specific needs, and provide any additional components or settings required for your spaCy pipeline.
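That said, if the error comes from this app's dependencies rather than your own code (the app uses pyresparser, which was written against spaCy 2.x), the simpler fix is usually to pin spaCy to a 2.x release and re-download the matching English model. The exact versions below are assumptions — check them against the repo's requirements.txt:

```shell
# Assumed version: pin spaCy to the 2.x line that pyresparser expects
pip install spacy==2.3.5

# Re-download the English model so its version matches spaCy 2.3.x
python -m spacy download en_core_web_sm
```

After reinstalling, restart the app so the old spaCy 3.x import is not still cached in the running process.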