aspeddro / cmp-pandoc.nvim

Pandoc source for nvim-cmp
MIT License

Bibliography autocomplete error #1

Closed · TudorAndrei closed this issue 2 years ago

TudorAndrei commented 2 years ago

Hello, I get the following error when using autocomplete from the bibliography.

Error detected while processing TextChangedI Autocommands for "*":
E5108: Error executing lua ...utoload/plugged/cmp-pandoc.nvim/lua/cmp_pandoc/parse.lua:142: invalid value (nil) at index 3 in table for 'concat'
stack traceback:
        [C]: in function 'concat'
        ...utoload/plugged/cmp-pandoc.nvim/lua/cmp_pandoc/parse.lua:142: in function <...utoload/plugged/cmp-pandoc.nvim/lua/cmp_pandoc/parse.lua:133>
        vim/shared.lua: in function 'citations'
        ...utoload/plugged/cmp-pandoc.nvim/lua/cmp_pandoc/parse.lua:157: in function 'bibliography'
        ...utoload/plugged/cmp-pandoc.nvim/lua/cmp_pandoc/parse.lua:223: in function 'init'
        ...toload/plugged/cmp-pandoc.nvim/lua/cmp_pandoc/source.lua:17: in function 'complete'
        ...config/nvim/autoload/plugged/nvim-cmp/lua/cmp/source.lua:290: in function 'complete'
        .../.config/nvim/autoload/plugged/nvim-cmp/lua/cmp/core.lua:253: in function 'complete'
        .../.config/nvim/autoload/plugged/nvim-cmp/lua/cmp/core.lua:166: in function 'callback'
        .../.config/nvim/autoload/plugged/nvim-cmp/lua/cmp/core.lua:216: in function 'autoindent'
        .../.config/nvim/autoload/plugged/nvim-cmp/lua/cmp/core.lua:158: in function 'on_change'
        .../.config/nvim/autoload/plugged/nvim-cmp/lua/cmp/init.lua:311: in function 'callback'
        ...nvim/autoload/plugged/nvim-cmp/lua/cmp/utils/autocmd.lua:31: in function 'emit'
        [string ":lua"]:1: in main chunk
aspeddro commented 2 years ago

Can you share your bib file?

TudorAndrei commented 2 years ago

This is the bib file:

@article{corso_principal_2020,
    title = {Principal {Neighbourhood} {Aggregation} for {Graph} {Nets}},
    url = {http://arxiv.org/abs/2004.05718},
    abstract = {Graph Neural Networks (GNNs) have been shown to be effective models for different predictive tasks on graph-structured data. Recent work on their expressive power has focused on isomorphism tasks and countable feature spaces. We extend this theoretical framework to include continuous features - which occur regularly in real-world input domains and within the hidden layers of GNNs - and we demonstrate the requirement for multiple aggregation functions in this context. Accordingly, we propose Principal Neighbourhood Aggregation (PNA), a novel architecture combining multiple aggregators with degree-scalers (which generalize the sum aggregator). Finally, we compare the capacity of different models to capture and exploit the graph structure via a novel benchmark containing multiple tasks taken from classical graph theory, alongside existing benchmarks from real-world domains, all of which demonstrate the strength of our model. With this work, we hope to steer some of the GNN research towards new aggregation methods which we believe are essential in the search for powerful and robust models.},
    urldate = {2022-01-21},
    journal = {arXiv:2004.05718 [cs, stat]},
    author = {Corso, Gabriele and Cavalleri, Luca and Beaini, Dominique and Liò, Pietro and Veličković, Petar},
    month = dec,
    year = {2020},
    note = {arXiv: 2004.05718},
    keywords = {Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning, Statistics - Machine Learning},
    annote = {Comment: 34th Conference on Neural Information Processing Systems (NeurIPS 2020)},
}

@article{kreuzer_rethinking_2021,
    title = {Rethinking {Graph} {Transformers} with {Spectral} {Attention}},
    url = {http://arxiv.org/abs/2106.03893},
    abstract = {In recent years, the Transformer architecture has proven to be very successful in sequence processing, but its application to other data structures, such as graphs, has remained limited due to the difficulty of properly defining positions. Here, we present the \${\textbackslash}textit\{Spectral Attention Network\}\$ (SAN), which uses a learned positional encoding (LPE) that can take advantage of the full Laplacian spectrum to learn the position of each node in a given graph. This LPE is then added to the node features of the graph and passed to a fully-connected Transformer. By leveraging the full spectrum of the Laplacian, our model is theoretically powerful in distinguishing graphs, and can better detect similar sub-structures from their resonance. Further, by fully connecting the graph, the Transformer does not suffer from over-squashing, an information bottleneck of most GNNs, and enables better modeling of physical phenomenons such as heat transfer and electric interaction. When tested empirically on a set of 4 standard datasets, our model performs on par or better than state-of-the-art GNNs, and outperforms any attention-based model by a wide margin, becoming the first fully-connected architecture to perform well on graph benchmarks.},
    urldate = {2022-01-21},
    journal = {arXiv:2106.03893 [cs]},
    author = {Kreuzer, Devin and Beaini, Dominique and Hamilton, William L. and Létourneau, Vincent and Tossou, Prudencio},
    month = oct,
    year = {2021},
    note = {arXiv: 2106.03893},
    keywords = {Computer Science - Machine Learning},
    annote = {Comment: Accepted in Proceedings of NeurIPS 2021},
}

@article{yun_graph_2020,
    title = {Graph {Transformer} {Networks}},
    url = {http://arxiv.org/abs/1911.06455},
    abstract = {Graph neural networks (GNNs) have been widely used in representation learning on graphs and achieved state-of-the-art performance in tasks such as node classification and link prediction. However, most existing GNNs are designed to learn node representations on the fixed and homogeneous graphs. The limitations especially become problematic when learning representations on a misspecified graph or a heterogeneous graph that consists of various types of nodes and edges. In this paper, we propose Graph Transformer Networks (GTNs) that are capable of generating new graph structures, which involve identifying useful connections between unconnected nodes on the original graph, while learning effective node representation on the new graphs in an end-to-end fashion. Graph Transformer layer, a core layer of GTNs, learns a soft selection of edge types and composite relations for generating useful multi-hop connections so-called meta-paths. Our experiments show that GTNs learn new graph structures, based on data and tasks without domain knowledge, and yield powerful node representation via convolution on the new graphs. Without domain-specific graph preprocessing, GTNs achieved the best performance in all three benchmark node classification tasks against the state-of-the-art methods that require pre-defined meta-paths from domain knowledge.},
    urldate = {2022-01-21},
    journal = {arXiv:1911.06455 [cs, stat]},
    author = {Yun, Seongjun and Jeong, Minbyul and Kim, Raehyun and Kang, Jaewoo and Kim, Hyunwoo J.},
    month = feb,
    year = {2020},
    note = {arXiv: 1911.06455
version: 2},
    keywords = {Computer Science - Machine Learning, Statistics - Machine Learning, Computer Science - Social and Information Networks},
    annote = {Comment: Neural Information Processing Systems (NeurIPS), 2019},
}

@article{xhonneux_continuous_2020,
    title = {Continuous {Graph} {Neural} {Networks}},
    url = {http://arxiv.org/abs/1912.00967},
    abstract = {This paper builds on the connection between graph neural networks and traditional dynamical systems. We propose continuous graph neural networks (CGNN), which generalise existing graph neural networks with discrete dynamics in that they can be viewed as a specific discretisation scheme. The key idea is how to characterise the continuous dynamics of node representations, i.e. the derivatives of node representations, w.r.t. time. Inspired by existing diffusion-based methods on graphs (e.g. PageRank and epidemic models on social networks), we define the derivatives as a combination of the current node representations, the representations of neighbors, and the initial values of the nodes. We propose and analyse two possible dynamics on graphs---including each dimension of node representations (a.k.a. the feature channel) change independently or interact with each other---both with theoretical justification. The proposed continuous graph neural networks are robust to over-smoothing and hence allow us to build deeper networks, which in turn are able to capture the long-range dependencies between nodes. Experimental results on the task of node classification demonstrate the effectiveness of our proposed approach over competitive baselines.},
    urldate = {2022-01-26},
    journal = {arXiv:1912.00967 [cs, stat]},
    author = {Xhonneux, Louis-Pascal A. C. and Qu, Meng and Tang, Jian},
    month = jul,
    year = {2020},
    note = {arXiv: 1912.00967},
    keywords = {Computer Science - Machine Learning, Statistics - Machine Learning},
}

@inproceedings{nicolas_experiment_2021,
    address = {Online},
    title = {An {Experiment} on {Implicitly} {Crowdsourcing} {Expert} {Knowledge} about {Romanian} {Synonyms} from {Language} {Learners}},
    url = {https://aclanthology.org/2021.nlp4call-1.1},
    urldate = {2022-01-31},
    booktitle = {Proceedings of the 10th {Workshop} on {NLP} for {Computer} {Assisted} {Language} {Learning}},
    publisher = {LiU Electronic Press},
    author = {Nicolas, Lionel and Aparaschivei, Lavinia Nicoleta and Lyding, Verena and Rodosthenous, Christos and Sangati, Federico and König, Alexander and Forascu, Corina},
    month = may,
    year = {2021},
    pages = {1--14},
}

@article{speer_conceptnet_2018,
    title = {{ConceptNet} 5.5: {An} {Open} {Multilingual} {Graph} of {General} {Knowledge}},
    shorttitle = {{ConceptNet} 5.5},
    url = {http://arxiv.org/abs/1612.03975},
    abstract = {Machine learning about language can be improved by supplying it with specific knowledge and sources of external information. We present here a new version of the linked open data resource ConceptNet that is particularly well suited to be used with modern NLP techniques such as word embeddings. ConceptNet is a knowledge graph that connects words and phrases of natural language with labeled edges. Its knowledge is collected from many sources that include expert-created resources, crowd-sourcing, and games with a purpose. It is designed to represent the general knowledge involved in understanding language, improving natural language applications by allowing the application to better understand the meanings behind the words people use. When ConceptNet is combined with word embeddings acquired from distributional semantics (such as word2vec), it provides applications with understanding that they would not acquire from distributional semantics alone, nor from narrower resources such as WordNet or DBPedia. We demonstrate this with state-of-the-art results on intrinsic evaluations of word relatedness that translate into improvements on applications of word vectors, including solving SAT-style analogies.},
    urldate = {2022-01-31},
    journal = {arXiv:1612.03975 [cs]},
    author = {Speer, Robyn and Chin, Joshua and Havasi, Catherine},
    month = dec,
    year = {2018},
    note = {arXiv: 1612.03975},
    keywords = {Computer Science - Computation and Language, I.2.7},
}

@misc{noauthor_commonsenseconceptnet-numberbatch_2022,
    title = {commonsense/conceptnet-numberbatch},
    url = {https://github.com/commonsense/conceptnet-numberbatch},
    urldate = {2022-01-31},
    publisher = {commonsense},
    month = jan,
    year = {2022},
    note = {original-date: 2015-07-13T16:22:52Z},
}
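
For what it's worth, the last @misc entry has no author field, and table.concat raises exactly this error whenever the table it is given contains a nil entry, so a missing field like that looks like a plausible trigger. A minimal sketch of the failure mode (not the plugin's actual code; the field order here is assumed):

local entry = {
  title = "commonsense/conceptnet-numberbatch",
  url = "https://github.com/commonsense/conceptnet-numberbatch",
  -- no `author` field in this @misc entry
  year = "2022",
}

-- If the parser collects fields positionally, the missing author leaves a
-- nil hole, and table.concat stops at it:
local doc = { entry.title, entry.url, entry.author, entry.year }
print(table.concat(doc, "\n"))
-- => invalid value (nil) at index 3 in table for 'concat'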
aspeddro commented 2 years ago

Fixed in https://github.com/aspeddro/cmp-pandoc.nvim/commit/0bde46176e78d50773c02bcee8352ba19d3e7be3
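
For anyone still on an older version, a minimal sketch of the kind of guard that avoids the crash (hypothetical helper; see the linked commit for the actual fix) is to skip absent fields before concatenating:

-- Hypothetical helper: collect only the fields that are present so
-- table.concat never receives a nil element.
local function format_entry(entry)
  local parts = {}
  for _, field in ipairs({ "title", "author", "year", "abstract" }) do
    if entry[field] ~= nil then
      table.insert(parts, entry[field])
    end
  end
  return table.concat(parts, "\n")
end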

TudorAndrei commented 2 years ago

It works now! Thank you!