tomv564 / pyls-mypy

Mypy plugin for the Python Language Server
MIT License
110 stars · 62 forks

Error: Header must provide a Content-Length property. #22

Closed teto closed 4 years ago

teto commented 5 years ago

Whenever I pull pyls-mypy into my Python environment, coc.nvim warns of a language server error:

|| [coc.nvim] error: Uncaught exception: Error: Header must provide a Content-Length property.
||     at StreamMessageReader.onData (/nix/store/nmpby0f5ryb09iri83xhbhrcsn9247hv-vimplugin-coc-nvim-2019-05-20/share/vim-plugins/coc-nvim/build/index.js:38575:27)
||     at Socket.readable.on (/nix/store/nmpby0f5ryb09iri83xhbhrcsn9247hv-vimplugin-coc-nvim-2019-05-20/share/vim-plugins/coc-nvim/build/index.js:38560:18)
||     at emitOne (events.js:116:13)
||     at Socket.emit (events.js:211:7)
||     at addChunk (_stream_readable.js:263:12)
||     at readableAddChunk (_stream_readable.js:250:11)
||     at Socket.Readable.push (_stream_readable.js:208:10)
||     at Pipe.onread (net.js:601:20)
tomv564 commented 5 years ago

The Python language server is responsible for formatting payloads, but perhaps this plugin has a way of, e.g., introducing two newlines into a response body that confuses coc.nvim's parser?

Can you find any explanation in the language server's logs?
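For context on this error: the LSP base protocol frames every message as a `Content-Length` header, a blank line, then the JSON body, so any stray bytes on the stream (an extra newline, a plugin's `print()` to stdout) get parsed as headers and trigger exactly this failure. Below is a minimal sketch of that framing, not coc.nvim's actual parser:

```python
import io
import json


def write_message(stream, payload: dict) -> None:
    """Frame a JSON-RPC payload per the LSP base protocol:
    a Content-Length header, a blank line, then the body."""
    body = json.dumps(payload).encode("utf-8")
    stream.write(b"Content-Length: %d\r\n\r\n" % len(body))
    stream.write(body)


def read_message(stream) -> dict:
    """Parse one framed message; fail if the header block lacks
    Content-Length, mirroring the check coc.nvim performs."""
    length = None
    while True:
        line = stream.readline().strip()
        if not line:  # blank line terminates the header block
            break
        name, _, value = line.partition(b":")
        if name.lower() == b"content-length":
            length = int(value)
    if length is None:
        raise ValueError("Header must provide a Content-Length property.")
    return json.loads(stream.read(length))


# A well-framed message round-trips cleanly...
buf = io.BytesIO()
write_message(buf, {"jsonrpc": "2.0", "method": "initialized"})
buf.seek(0)
print(read_message(buf)["method"])  # → initialized

# ...but any stray output written before the header makes the
# reader see a header block with no Content-Length and fail.
bad = io.BytesIO(b"stray debug output\r\n\r\n{}")
try:
    read_message(bad)
except ValueError as e:
    print(e)  # → Header must provide a Content-Length property.
```

This is why the question below about the server's own logs matters: if anything in the pyls process writes to stdout outside this framing, the client's parser breaks.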

teto commented 5 years ago

This is the full output of :CocInfo:

## versions

vim version: NVIM v0.4.0-dev
node version: v8.16.0
coc.nvim version: 0.0.67-8741e930c9
term: xterm-termite
platform: linux

## Error messages
Uncaught exception: Error: Header must provide a Content-Length property.
    at StreamMessageReader.onData (/nix/store/kphinzihay7s4vrisg7n3jbykncqd467-vimplugin-coc-nvim-2019-05-26/share/vim-plugins/coc-nvim/build/index.js:38575:27)
    at Socket.readable.on (/nix/store/kphinzihay7s4vrisg7n3jbykncqd467-vimplugin-coc-nvim-2019-05-26/share/vim-plugins/coc-nvim/build/index.js:38560:18)
    at emitOne (events.js:116:13)
    at Socket.emit (events.js:211:7)
    at addChunk (_stream_readable.js:263:12)
    at readableAddChunk (_stream_readable.js:250:11)
    at Socket.Readable.push (_stream_readable.js:208:10)
    at Pipe.onread (net.js:601:20)
## Output channel: snippets

[Error 12:28:52] Convert regex error for: li(st)? (?<num>\d+)$
[Error 12:28:52] Convert regex error for: ol(st)? (?<num>\d+)$

## Output channel: languageserver.python

[Trace - 12:28:52] Sending request 'initialize - (0)'.
Params: {
    "processId": 4479,
    "rootPath": "/home/teto/mptcpanalyzer",
    "rootUri": "file:///home/teto/mptcpanalyzer",
    "capabilities": {
        "workspace": {
            "applyEdit": true,
            "workspaceEdit": {
                "documentChanges": true,
                "resourceOperations": [
                    "create",
                    "rename",
                    "delete"
                ],
                "failureHandling": "textOnlyTransactional"
            },
            "didChangeConfiguration": {
                "dynamicRegistration": true
            },
            "didChangeWatchedFiles": {
                "dynamicRegistration": true
            },
            "symbol": {
                "dynamicRegistration": true,
                "symbolKind": {
                    "valueSet": [
                        1,
                        2,
                        3,
                        4,
                        5,
                        6,
                        7,
                        8,
                        9,
                        10,
                        11,
                        12,
                        13,
                        14,
                        15,
                        16,
                        17,
                        18,
                        19,
                        20,
                        21,
                        22,
                        23,
                        24,
                        25,
                        26
                    ]
                }
            },
            "executeCommand": {
                "dynamicRegistration": true
            },
            "configuration": true,
            "workspaceFolders": true
        },
        "textDocument": {
            "publishDiagnostics": {
                "relatedInformation": true
            },
            "synchronization": {
                "dynamicRegistration": true,
                "willSave": true,
                "willSaveWaitUntil": true,
                "didSave": true
            },
            "completion": {
                "dynamicRegistration": true,
                "contextSupport": true,
                "completionItem": {
                    "snippetSupport": true,
                    "commitCharactersSupport": true,
                    "documentationFormat": [
                        "markdown",
                        "plaintext"
                    ],
                    "deprecatedSupport": true,
                    "preselectSupport": true
                },
                "completionItemKind": {
                    "valueSet": [
                        1,
                        2,
                        3,
                        4,
                        5,
                        6,
                        7,
                        8,
                        9,
                        10,
                        11,
                        12,
                        13,
                        14,
                        15,
                        16,
                        17,
                        18,
                        19,
                        20,
                        21,
                        22,
                        23,
                        24,
                        25
                    ]
                }
            },
            "hover": {
                "dynamicRegistration": true,
                "contentFormat": [
                    "markdown",
                    "plaintext"
                ]
            },
            "signatureHelp": {
                "dynamicRegistration": true,
                "signatureInformation": {
                    "documentationFormat": [
                        "markdown",
                        "plaintext"
                    ],
                    "parameterInformation": {
                        "labelOffsetSupport": true
                    }
                }
            },
            "definition": {
                "dynamicRegistration": true
            },
            "references": {
                "dynamicRegistration": true
            },
            "documentHighlight": {
                "dynamicRegistration": true
            },
            "documentSymbol": {
                "dynamicRegistration": true,
                "symbolKind": {
                    "valueSet": [
                        1,
                        2,
                        3,
                        4,
                        5,
                        6,
                        7,
                        8,
                        9,
                        10,
                        11,
                        12,
                        13,
                        14,
                        15,
                        16,
                        17,
                        18,
                        19,
                        20,
                        21,
                        22,
                        23,
                        24,
                        25,
                        26
                    ]
                }
            },
            "codeAction": {
                "dynamicRegistration": true,
                "codeActionLiteralSupport": {
                    "codeActionKind": {
                        "valueSet": [
                            "",
                            "quickfix",
                            "refactor",
                            "refactor.extract",
                            "refactor.inline",
                            "refactor.rewrite",
                            "source",
                            "source.organizeImports"
                        ]
                    }
                }
            },
            "codeLens": {
                "dynamicRegistration": true
            },
            "formatting": {
                "dynamicRegistration": true
            },
            "rangeFormatting": {
                "dynamicRegistration": true
            },
            "onTypeFormatting": {
                "dynamicRegistration": true
            },
            "rename": {
                "dynamicRegistration": true,
                "prepareSupport": true
            },
            "documentLink": {
                "dynamicRegistration": true
            },
            "typeDefinition": {
                "dynamicRegistration": true
            },
            "implementation": {
                "dynamicRegistration": true
            },
            "declaration": {
                "dynamicRegistration": true
            },
            "colorProvider": {
                "dynamicRegistration": true
            },
            "foldingRange": {
                "dynamicRegistration": true,
                "rangeLimit": 5000,
                "lineFoldingOnly": true
            }
        }
    },
    "initializationOptions": {},
    "trace": "verbose",
    "workspaceFolders": [
        {
            "uri": "file:///home/teto/mptcpanalyzer",
            "name": "mptcpanalyzer"
        }
    ]
}

[Trace - 12:28:53] Received response 'initialize - (0)' in 1097ms.
Result: {
    "capabilities": {
        "codeActionProvider": true,
        "codeLensProvider": {
            "resolveProvider": false
        },
        "completionProvider": {
            "resolveProvider": false,
            "triggerCharacters": [
                "."
            ]
        },
        "documentFormattingProvider": true,
        "documentHighlightProvider": true,
        "documentRangeFormattingProvider": true,
        "documentSymbolProvider": true,
        "definitionProvider": true,
        "executeCommandProvider": {
            "commands": []
        },
        "hoverProvider": true,
        "referencesProvider": true,
        "renameProvider": true,
        "signatureHelpProvider": {
            "triggerCharacters": [
                "(",
                ","
            ]
        },
        "textDocumentSync": 2,
        "experimental": {}
    }
}

[Trace - 12:28:53] Sending notification 'initialized'.
Params: {}

[Trace - 12:28:53] Sending notification 'workspace/didChangeConfiguration'.
Params: {
    "settings": {
        "pyls": {
            "enable": true,
            "trace": {
                "server": "verbose"
            },
            "commandPath": "",
            "configurationSources": [
                "pycodestyle"
            ],
            "plugins": {
                "jedi_completion": {
                    "enabled": true
                },
                "jedi_hover": {
                    "enabled": true
                },
                "jedi_references": {
                    "enabled": true
                },
                "jedi_signature_help": {
                    "enabled": true
                },
                "jedi_symbols": {
                    "enabled": true,
                    "all_scopes": true
                },
                "mccabe": {
                    "enabled": true,
                    "threshold": 15
                },
                "preload": {
                    "enabled": true
                },
                "pycodestyle": {
                    "enabled": true
                },
                "pylint": {
                    "enabled": false
                },
                "pydocstyle": {
                    "enabled": false,
                    "match": "(?!test_).*\\.py",
                    "matchDir": "[^\\.].*"
                },
                "pyflakes": {
                    "enabled": false
                },
                "rope_completion": {
                    "enabled": true
                },
                "yapf": {
                    "enabled": true
                }
            }
        }
    }
}

[Trace - 12:28:53] Sending notification 'textDocument/didOpen'.
Params: {
    "textDocument": {
        "uri": "file:///home/teto/mptcpanalyzer/mptcpanalyzer/cli.py",
        "languageId": "python",
        "version": 1,
        "text": "# -*- coding: utf8\n# PYTHON_ARGCOMPLETE_OK\n# vim: set et fenc=utf-8 ff=unix sts=4 sw=4 ts=4 :\n\n# Copyright 2015-2016 Université Pierre et Marie Curie\n# Copyright 2017 IIJ Initiative for Internet Japan\n#\n# Matthieu coudron, coudron@iij.ad.jp\n\"\"\"\n# the PYTHON_ARGCOMPLETE_OK line a few lines up can enable shell completion\nfor argparse scripts as explained in\n- http://dingevoninteresse.de/wpblog/?p=176\n\ntodo test https://github.com/jonathanslenders/python-prompt-toolkit/tree/master/examples/tutorial\n\"\"\"\nimport sys\nimport argparse\nimport logging\nimport os\nimport subprocess\nimport functools\nimport inspect\nfrom mptcpanalyzer.config import MpTcpAnalyzerConfig\nfrom mptcpanalyzer.tshark import TsharkConfig\nfrom mptcpanalyzer.version import __version__\nfrom mptcpanalyzer.parser import gen_bicap_parser, LoadSinglePcap, gen_pcap_parser, FilterStream, \\\n    MpTcpAnalyzerParser, with_argparser_test, MpTcpStreamId, TcpStreamId\nimport mptcpanalyzer.parser as mpparser\nimport mptcpanalyzer.data as mpdata\nfrom mptcpanalyzer.data import map_mptcp_connection, load_into_pandas, map_tcp_stream, \\\n    merge_mptcp_dataframes_known_streams, merge_tcp_dataframes_known_streams, \\\n    load_merged_streams_into_pandas, classify_reinjections, pandas_to_csv\nfrom mptcpanalyzer import RECEIVER_SUFFIX, SENDER_SUFFIX, _sender, _receiver\nfrom mptcpanalyzer.metadata import Metadata\nfrom mptcpanalyzer.connection import MpTcpConnection, TcpConnection, MpTcpMapping, TcpMapping, \\\n    ConnectionRoles, swap_role\nimport mptcpanalyzer.cache as mc\nfrom mptcpanalyzer.statistics import mptcp_compute_throughput, tcp_get_stats\nimport mptcpanalyzer as mp\nfrom mptcpanalyzer import PreprocessingActions, Protocol\nimport stevedore\nimport pandas as pd\nimport shlex\nimport traceback\nimport pprint\nimport textwrap\nimport readline\nimport numpy as np\nfrom typing import List, Any, Tuple, Dict, Callable, Set\nimport cmd2\nimport math\nfrom cmd2 import 
with_argparser, with_argparser_and_unknown_args, with_category, argparse_completer\nfrom enum import Enum, auto\nimport mptcpanalyzer.pdutils\nimport dataclasses\nfrom colorama import Fore, Back\nfrom mptcpanalyzer.debug import debug_dataframe\nfrom stevedore import extension\n\nplugin_logger = logging.getLogger(\"stevedore\")\nplugin_logger.addHandler(logging.StreamHandler())\n\n# log = logging.getLogger(__name__)\n\n# this catches the \"root\" logger which is the parent of all loggers\nlog = logging.getLogger()\n# ch = logging.StreamHandler()\n# formatter = logging.Formatter('%(name)s:%(levelname)s: %(message)s')\n# ch.setFormatter(formatter)\n\n# log.addHandler(ch)\n# log.setLevel(logging.DEBUG)\n# handler = logging.FileHandler(\"mptcpanalyzer.log\", delay=False)\n\n\nhistfile_size = 1000\n\n\nlogLevels = {\n    logging.getLevelName(level): level for level in [\n        mp.TRACE, logging.DEBUG, logging.INFO, logging.ERROR\n    ]\n}\n\n\nCAT_TCP = \"TCP related\"\nCAT_MPTCP = \"MPTCP related\"\nCAT_GENERAL = \"Tool\"\n\n\nFG_COLORS = {\n    'black': Fore.BLACK,\n    'red': Fore.RED,\n    'green': Fore.GREEN,\n    'yellow': Fore.YELLOW,\n    'blue': Fore.BLUE,\n    'magenta': Fore.MAGENTA,\n    'cyan': Fore.CYAN,\n    'white': Fore.WHITE,\n}\nBG_COLORS = {\n    'black': Back.BLACK,\n    'red': Back.RED,\n    'green': Back.GREEN,\n    'yellow': Back.YELLOW,\n    'blue': Back.BLUE,\n    'magenta': Back.MAGENTA,\n    'cyan': Back.CYAN,\n    'white': Back.WHITE,\n}\n\ncolor_off = Fore.RESET + Back.RESET\n\n# todo might be handy with async_update_prompt\n\ndef is_loaded(f):\n    \"\"\"\n    Decorator checking that dataset has correct columns\n    \"\"\"\n    @functools.wraps(f)\n    def wrapped(self, *args):\n\n        log.debug(\"Cheking if a pcap was already loaded\")\n        if self.data is not None:\n            return f(self, *args)\n        else:\n            raise mp.MpTcpException(\"Please load a pcap with `load_pcap` first\")\n        return\n    return 
wrapped\n\n\ndef experimental(f):\n    \"\"\"\n    Decorator checking that dataset has correct columns\n    \"\"\"\n\n    @functools.wraps(f)\n    def wrapped(self, *args, **kwargs):\n        print(\"WORK IN PROGRESS, RESULTS MAY BE WRONG\")\n        return f(self, *args, **kwargs)\n    return wrapped\n\n\n# introduced in cmd2 0.9.13\ndef provide_namespace(cmd2_instance):\n\n    myNs = argparse.Namespace()\n    myNs._dataframes = {\"pcap\": cmd2_instance.data.copy()}\n    return myNs\n\n\nclass MpTcpAnalyzerCmdApp(cmd2.Cmd):\n    \"\"\"\n    mptcpanalyzer can run into 3 modes:\n\n    #. interactive mode (default):\n        an interpreter with some basic completion will accept your commands.\n    There is also some help embedded.\n    #. if a filename is passed as argument, it will load commands from\n    this file otherwise, it will consider the unknow arguments as one command,\n     the same that could be used interactively\n    \"\"\"\n\n    intro = textwrap.dedent(\"\"\"\n        Press ? 
to list the available commands and `help <command>` or `<command> -h`\n        for a detailed help of the command\n        \"\"\".format(__version__))\n\n    def stevedore_error_handler(manager, entrypoint, exception):\n        print(\"Error while loading entrypoint [%s]\" % entrypoint)\n\n    def __init__(self, cfg: MpTcpAnalyzerConfig, stdin=sys.stdin, **kwargs) -> None:\n        \"\"\"\n        Args:\n            cfg (MpTcpAnalyzerConfig): A valid configuration\n\n        Attributes:\n            prompt (str): Prompt seen by the user, displays currently loaded pcpa\n            config: configution to get user parameters\n            data:  dataframe currently in use\n        \"\"\"\n\n        shortcuts = ({\n            'lm': 'list_mptcp_connections',\n            'lt': 'list_tcp_connections',\n            'ls': 'list_subflows',\n            'lr': 'list_reinjections'\n        })\n        super().__init__(completekey='tab', stdin=stdin, shortcuts=shortcuts)\n        self.prompt = FG_COLORS['blue'] + \"Ready>\" + color_off\n        self.data = None  # type: pd.DataFrame\n        self.config = cfg\n        self.tshark_config = TsharkConfig(\n            delimiter=cfg[\"mptcpanalyzer\"][\"delimiter\"],\n            profile=cfg[\"mptcpanalyzer\"][\"wireshark_profile\"],\n        )\n\n        # cmd2 specific initialization\n        self.abbrev = True  # when no ambiguities, run the command\n        self.allow_cli_args = True  # disable autoload of transcripts\n        self.allow_redirection = True  # allow pipes in commands\n        self.default_to_shell = False\n        self.debug = True  # for now\n        self.set_posix_shlex = True\n\n        # Pandas specific initialization\n        # for as long as https://github.com/pydata/numexpr/issues/331 is a problem\n        # does not seem to work :s\n        pd.set_option('compute.use_numexpr', False)\n        pd.set_option('display.max_info_columns', 5)  # verbose dataframe.info\n        log.debug(\"use numexpr? 
%d\" % pd.get_option('compute.use_numexpr', False))\n\n        #  Load Plots\n        ######################\n        # you can  list available plots under the namespace\n        # https://pypi.python.org/pypi/entry_point_inspector\n        # https://docs.openstack.org/stevedore/latest/reference/index.html#stevedore.extension.ExtensionManager\n        # mgr = driver.DriverManager(\n        self.plot_mgr = extension.ExtensionManager(\n            namespace='mptcpanalyzer.plots',\n            invoke_on_load=True,\n            verify_requirements=True,\n            invoke_args=(self.tshark_config,),\n            # invoke_kwds\n            propagate_map_exceptions=True,\n            on_load_failure_callback=self.stevedore_error_handler\n        )\n\n        self.cmd_mgr = extension.ExtensionManager(\n            namespace='mptcpanalyzer.cmds',\n            invoke_on_load=True,\n            verify_requirements=True,\n            invoke_args=(),\n            propagate_map_exceptions=False,\n            on_load_failure_callback=self.stevedore_error_handler\n        )\n\n        #  do_plot parser\n        ######################\n        # not my first choice but to accomodate cmd2 constraints\n        # see https://github.com/python-cmd2/cmd2/issues/498\n        subparsers = MpTcpAnalyzerCmdApp.plot_parser.add_subparsers(\n            dest=\"plot_type\", required=True,\n            title=\"Available plots\",\n            # prog= \"\",\n            # description=\"\",\n            parser_class=MpTcpAnalyzerParser,\n            help='Consult each plot\\'s help via its plot <PLOT_TYPE> -h flag.',\n        )\n\n        def register_plots(ext, subparsers):\n            \"\"\"Adds a parser per plot\"\"\"\n            # check if dat is loaded\n            parser = ext.obj.default_parser()\n            assert parser, \"Forgot to return parser\"\n            # we can pass an additionnal help\n            log.debug(\"Registering subparser for plot %s\" % ext.name)\n            
subparsers.add_parser(\n                ext.name, parents=[parser],\n                # parents= just copies arguments, not the actual help !\n                description=parser.description,\n                epilog=parser.epilog,\n                add_help=False,\n            )\n\n        self.plot_mgr.map(register_plots, subparsers)\n        # will raise NoMatches when no plot available\n\n        # if loading commands from a file, we disable prompt not to pollute output\n        if stdin != sys.stdin:\n            log.info(\"Disabling prompt because reading from stdin\")\n            self.use_rawinput = False\n            self.prompt = \"\"\n            self.intro = \"\"\n\n        self.poutput(\"Run `checkhealth` in case of issues\")\n\n    def do_checkhealth(self, args):\n        if sys.hexversion <= 0x03070000:\n            self.perror(\"This program requires a newer python than %s\" % sys.version)\n\n        try:\n            self.poutput(\"Checking for tshark version >= 3.X.X ...\")\n\n            out = subprocess.check_output([\"tshark\", \"--version\"])\n            first_line = out.decode().splitlines()[0]\n            import re\n            m = re.search(\"([\\d.])\", first_line)\n            major_version = int(m.group(0))\n            self.poutput(\"found tshark major version %d\" % major_version)\n            if major_version < 3:\n                self.perror(\"Your tshark version seems too old ?!\")\n            else:\n                self.poutput(\"Your tshark version looks fine\")\n\n        except Exception as e:\n            self.poutput(\"An error happened while checking tshark version\")\n            self.poutput(\"Run `tshark -v` and check it's >= 3.0.0\")\n            self.perror(\"%s\" % e)\n\n\n    @property\n    def plot_manager(self):\n        return self.plot_mgr\n\n    @plot_manager.setter\n    def plot_manager(self, mgr):\n        \"\"\"\n        Override the default plot manager, only used for testing\n        :param mgr: a stevedore 
plugin manager\n        \"\"\"\n        self.plot_mgr = mgr\n\n    def load_plugins(self, mgr=None):\n        \"\"\"\n        This function monkey patches the class to inject Command plugins\n\n        Attrs:\n            mgr: override the default plugin manager when set.\n\n        Useful to run tests\n        \"\"\"\n        mgr = mgr if mgr is not None else self.cmd_mgr\n\n        def _inject_cmd(ext, data):\n            log.debug(\"Injecting plugin %s\" % ext.name)\n            for prefix in [\"do\", \"help\", \"complete\"]:\n                method_name = prefix + \"_\" + ext.name\n                try:\n                    obj = getattr(ext.obj, prefix)\n                    if obj:\n                        setattr(MpTcpAnalyzerCmdApp, method_name, obj)\n                except AttributeError:\n                    log.debug(\"Plugin does not provide %s\" % method_name)\n\n        # there is also map_method available\n        try:\n            mgr.map(_inject_cmd, self)\n        except stevedore.exception.NoMatches as e:\n            log.error(\"stevedore: No matches (%s)\" % e)\n\n    def precmd(self, line):\n        \"\"\"\n        Here we can preprocess line, with for instance shlex.split() ?\n        Note:\n            This is only called when using cmdloop, not with onecmd !\n        \"\"\"\n        # default behavior\n        print(\">>> %s\" % line)\n        return line\n\n    def cmdloop(self, intro=None):\n        \"\"\"\n        overrides baseclass just to be able to catch exceptions\n        \"\"\"\n        try:\n            sys_exit_code = super().cmdloop()\n        except KeyboardInterrupt as e:\n            pass\n\n        # Exception raised by sys.exit(), which is called by argparse\n        # we don't want the program to finish just when there is an input error\n        except SystemExit as e:\n            sys_exit_code = self.cmdloop()\n        except mp.MpTcpException as e:\n            print(e)\n            sys_exit_code = self.cmdloop()\n       
 except Exception as e:\n            log.critical(\"Unknown error, aborting...\")\n            log.critical(\"%s\" % e)\n            print(\"Displaying backtrace:\\n\")\n            traceback.print_exc()\n\n        return sys_exit_code\n\n    def postcmd(self, stop, line):\n        \"\"\"\n        Override baseclass\n        returning true will stop the program\n        \"\"\"\n        log.debug(\"postcmd result for line [%s] => %r\", line, stop)\n\n        return True if stop is True else False\n\n    parser = MpTcpAnalyzerParser(description=\"List subflows of an MPTCP connection\")\n    filter_stream = parser.add_argument(\n        \"mptcpstream\", action=\"store\", type=int,\n        help=\"Equivalent to wireshark mptcp.stream id\")\n    # TODO for tests only, fix\n    # setattr(filter_stream, argparse_completer.ACTION_ARG_CHOICES, [0, 1, 2])\n\n    @with_argparser(parser)\n    @with_category(CAT_MPTCP)\n    @is_loaded\n    def do_list_subflows(self, args):\n        \"\"\"\n        list mptcp subflows\n                [mptcp.stream id]\n\n        Example:\n            ls 0\n        \"\"\"\n        self.list_subflows(args.mptcpstream)\n\n    @is_loaded\n    def list_subflows(self, mptcpstreamid: MpTcpStreamId):\n\n        try:\n            con = MpTcpConnection.build_from_dataframe(self.data, mptcpstreamid)\n            msg = \"mptcp.stream %d has %d subflow(s) (client/server): \"\n            self.poutput(msg % (mptcpstreamid, len(con.subflows())))\n            for sf in con.subflows():\n                self.poutput(\"\\t%s\" % sf)\n        except mp.MpTcpMissingKey as e:\n            self.poutput(e)\n        except mp.MpTcpException as e:\n            self.perror(e)\n\n\n    parser = argparse_completer.ACArgumentParser(\n        description='''\n        This function tries to map a tcp.stream id from one pcap\n        to one in another pcap in another dataframe.\n    ''')\n\n    # TODO could use LoadSinglePcap\n    load_pcap1 = parser.add_argument(\"pcap1\", 
action=\"store\", help=\"first to load\")\n    load_pcap2 = parser.add_argument(\"pcap2\", action=\"store\", help=\"second pcap\")\n\n    # cmd2.Cmd.path_complete ?\n    # setattr(action_stream, argparse_completer.ACTION_ARG_CHOICES, range(0, 10))\n    # use path_filter\n    setattr(load_pcap1, argparse_completer.ACTION_ARG_CHOICES, ('path_complete', ))\n    setattr(load_pcap2, argparse_completer.ACTION_ARG_CHOICES, ('path_complete', ))\n\n    parser.add_argument(\"tcpstreamid\", action=\"store\", type=int,\n                        help=\"tcp.stream id visible in wireshark for pcap1\")\n    parser.add_argument(\"--json\", action=\"store_true\", default=False,\n                        help=\"Machine readable summary.\")\n    parser.add_argument('-v', '--verbose', dest=\"verbose\", default=False, action=\"store_true\",\n                        help=\"how to display each connection\")\n\n    parser.epilog = '''\n    Examples:\n        map_tcp_connection examples/client_1_tcp_only.pcap examples/server_1_tcp_only.pcap  0\n    '''\n\n    @with_argparser(parser)\n    @with_category(CAT_TCP)\n    def do_map_tcp_connection(self, args):\n\n        df1 = load_into_pandas(args.pcap1, self.tshark_config)\n        df2 = load_into_pandas(args.pcap2, self.tshark_config)\n\n        main_connection = TcpConnection.build_from_dataframe(df1, args.tcpstreamid)\n\n        mappings = map_tcp_stream(df2, main_connection)\n\n        self.poutput(\"Trying to map %s\" % (main_connection,))\n        self.poutput(\"%d mapping(s) found\" % len(mappings))\n\n        for match in mappings:\n\n            # formatted_output = main.format_mapping(match)\n            # output = \"{c1.tcpstreamid} <-> {c2.tcpstreamid} with score={score}\"\n            # formatted_output = output.format(\n            #     c1=main_connection,\n            #     c2=match,\n            #     score=score\n            # )\n            # print(formatted_output)\n            self.poutput(\"%s\" % str(match))\n\n    parser = 
MpTcpAnalyzerParser(\n        description=\"This function tries to map a mptcp.stream from a dataframe\"\n                    \"(aka pcap) to mptcp.stream\"\n                    \"in another dataframe. \"\n    )\n\n    load_pcap1 = parser.add_pcap(\"pcap1\", action=\"store\", help=\"first to load\")\n    load_pcap2 = parser.add_pcap(\"pcap2\", action=\"store\", help=\"second pcap\")\n\n    # setattr(load_pcap1, argparse_completer.ACTION_ARG_CHOICES, ('path_complete', ))\n    # setattr(load_pcap2, argparse_completer.ACTION_ARG_CHOICES, ('path_complete', ))\n    parser.add_argument(\"mptcpstreamid\", action=\"store\", type=mp.MpTcpStreamId, help=\"to filter\")\n    parser.add_argument(\"--trim\", action=\"store\", type=float, default=0,\n                        help=\"Remove mappings with a score below this threshold\")\n    parser.add_argument(\"--limit\", action=\"store\", type=int, default=2,\n                        help=\"Limit display to the --limit best mappings\")\n    parser.add_argument('-v', '--verbose', dest=\"verbose\", default=False, action=\"store_true\",\n                        help=\"display all candidates\")\n\n    parser.epilog = inspect.cleandoc('''\n        For example run:\n        > map_mptcp_connection examples/client_2_redundant.pcapng examples/server_2_redundant.pcapng 0\n    ''')\n    @with_argparser(parser)\n    @with_category(CAT_MPTCP)\n    def do_map_mptcp_connection(self, args):\n        \"\"\"\n        Tries to map mptcp.streams from different pcaps.\n        Score based mechanism\n\n        Todo:\n            - Limit number of displayed matches\n        \"\"\"\n\n        df1 = load_into_pandas(args.pcap1, self.tshark_config)\n        df2 = load_into_pandas(args.pcap2, self.tshark_config)\n\n        main_connection = MpTcpConnection.build_from_dataframe(df1, args.mptcpstreamid)\n        mappings = map_mptcp_connection(df2, main_connection)\n\n        self.poutput(\"%d mapping(s) found\" % len(mappings))\n        
mappings.sort(key=lambda x: x.score, reverse=True)\n\n        for rank, match in enumerate(mappings):\n\n            if rank >= args.limit:\n                self.pfeedback(\"ignoring mappings left\")\n                break\n\n            winner_like = match.score == float('inf')\n\n            output = \"{c1.mptcpstreamid} <-> {c2.mptcpstreamid} with score={score} {extra}\"\n            formatted_output = output.format(\n                c1=main_connection,\n                c2=match.mapped,\n                score=FG_COLORS['red'] + str(match.score) + color_off,\n                extra=\" <-- should be a correct match\" if winner_like else \"\"\n            )\n\n            if match.score < args.trim:\n                continue\n\n            # match = MpTcpMapping(match.mapped, match.score, mapped_subflows)\n            def _print_subflow(x):\n                return \"\\n-\" + x[0].format_mapping(x[1])\n\n            formatted_output += ''.join([_print_subflow(x) for x in match.subflow_mappings])\n\n            self.poutput(formatted_output)\n\n\n\n    summary_parser = MpTcpAnalyzerParser(\n        description=\"Prints a summary of the mptcp connection\"\n    )\n    action_stream = summary_parser.add_argument(\n        \"tcpstream\", type=TcpStreamId, action=mp.parser.retain_stream(\"pcap\"),\n        help=\"tcp.stream id\")\n    summary_parser.epilog = inspect.cleandoc('''\n        Similar to wireshark's \"Follow -> TCP stream\"\n    ''')\n    # TODO fix that\n    @is_loaded  # type: ignore\n    @with_argparser_test(summary_parser, preload_pcap=True)\n    # @with_argparser(summary_parser, ns_provider=provide_namespace)\n    def do_tcp_summary(self, args, unknown):\n        self.poutput(\"Summary of TCP connection\")\n        df = self.data\n\n        con = df.tcp.connection(args.tcpstream)\n        con.fill_dest(df)\n\n        for dest in ConnectionRoles:\n            res = tcp_get_stats(\n                self.data, args.tcpstream,\n                dest,\n            
    False\n            )\n\n            self.poutput(res)\n\n\n    summary_parser = MpTcpAnalyzerParser(\n        description=\"Prints a summary of the mptcp connection\"\n    )\n    action_stream = summary_parser.filter_stream(\n        \"mptcpstream\",\n        # type=MpTcpStreamId,\n        protocol=mp.Protocol.MPTCP,\n        action=mp.parser.retain_stream(\"pcap\"),\n        help=\"mptcp.stream id\"\n    )\n    # action_stream = summary_parser.add_argument(\n    #     \"mptcpstream\", type=MpTcpStreamId, action=mp.parser.retain_stream(\"pcap\"),\n    #     help=\"mptcp.stream id\")\n    # TODO update the stream id autcompletion dynamically ?\n    # setattr(action_stream, argparse_completer.ACTION_ARG_CHOICES, range(0, 10))\n\n    # TODO use filter_dest instead\n    summary_parser.filter_destination()\n    # summary_parser.add_argument(\n    #     '--dest',\n    #     action=mpparser.AppendDestination,\n    #     help='Filter flows according to their direction'\n    #     '(towards the client or the server)'\n    #     'Depends on mptcpstream'\n    # )\n    summary_parser.add_argument(\"--json\", action=\"store_true\", default=False,\n        help=\"Machine readable summary.\")\n\n    @is_loaded  # type: ignore\n    @with_argparser_test(summary_parser, preload_pcap=True)\n    def do_mptcp_summary(self, args, unknown):\n        \"\"\"\n        Naive summary contributions of the mptcp connection\n        See summary_extended for more details\n        \"\"\"\n\n        df = self.data\n        mptcpstream = args.mptcpstream\n\n        df = df.mptcp.fill_dest(mptcpstream)\n\n        for destination in args.dest:\n            stats = mptcp_compute_throughput(\n                self.data, args.mptcpstream,\n                destination,\n                False\n            )\n\n            if args.json:\n                import json\n                val = json.dumps(dataclasses.asdict(stats), ensure_ascii=False)\n                self.poutput(val)\n                
return\n\n            msg = \"mptcpstream %d transferred %d bytes towards %s.\"\n            self.poutput(msg % (stats.mptcpstreamid, stats.mptcp_throughput_bytes, destination))\n            for sf in stats.subflow_stats:\n                log.log(mp.TRACE, \"sf after computation: %r\" % sf)\n                self.poutput(\n                    \"tcpstream {} transferred {sf_tput} bytes out of {mptcp_tput}, \"\n                    \"accounting for {tput_ratio:.2f}%\".format(\n                        sf.tcpstreamid, sf_tput=sf.throughput_bytes,\n                        mptcp_tput=stats.mptcp_throughput_bytes,\n                        tput_ratio=sf.throughput_contribution*100\n                    ))\n\n\n\n    parser = gen_pcap_parser({\"pcap\": PreprocessingActions.Preload})\n    parser.description = \"Export connection(s) to CSV\"\n    parser.epilog = '''\n\n    '''\n    parser.add_argument(\"output\", action=\"store\", help=\"Output filename\")\n\n    group = parser.add_mutually_exclusive_group(required=False)\n    group.add_argument('--tcpstream', action=functools.partial(FilterStream, \"pcap\", False),\n            type=TcpStreamId)\n    group.add_argument('--mptcpstream', action=functools.partial(FilterStream, \"pcap\", True),\n            type=MpTcpStreamId)\n\n    # TODO check ? 
use AppendDestination\n    parser.add_argument(\"--destination\", action=\"store\",\n        choices=mp.DestinationChoice,\n        help=\"tcp.stream id visible in wireshark\")\n    parser.add_argument(\"--drop-syn\", action=\"store_true\", default=False,\n        help=\"Helper just for my very own specific usecase\")\n\n    @is_loaded\n    @with_argparser(parser)\n    def do_tocsv(self, args):\n        \"\"\"\n        Selects tcp/mptcp/udp connection and exports it to csv\n        \"\"\"\n\n        df = self.data\n        # need to compute the destinations before dropping syn from the dataframe\n        for streamid, subdf in df.groupby(\"tcpstream\"):\n            con = df.tcp.connection(streamid)\n            df = con.fill_dest(df)\n\n            if args.drop_syn:\n                # use subdf ?\n                self.poutput(\"drop-syn Unsupported yet\")\n                df.drop(subdf.head(3).index, inplace=True)\n                # drop 3 first packets of each connection ?\n                # this should be a filter\n                syns = df[df.tcpflags == mp.TcpFlags.SYN]\n\n        self.poutput(\"Writing to %s\" % args.output)\n        pandas_to_csv(df, args.output)\n\n\n\n    sumext_parser = gen_bicap_parser(mp.Protocol.MPTCP, True)\n    sumext_parser.add_argument(\"--json\", action=\"store_true\", default=False,\n        help=\"Machine readable summary.\")\n    sumext_parser.description = inspect.cleandoc(\"\"\"\n        Look into more details of an mptcp connection.\n        Requires to have both server and client pcap.\n    \"\"\")\n    sumext_parser.epilog = inspect.cleandoc(\"\"\"\n        > summary_extended examples/client_2_redundant.pcapng 0 examples/server_2_redundant.pcapng 0\n    \"\"\")\n    @with_argparser_test(sumext_parser, preload_pcap=False)  # type: ignore\n    def do_summary_extended(self, args, unknown):\n        \"\"\"\n        Summarize contributions of each subflow\n        For now it is naive, does not look at retransmissions ?\n        
\"\"\"\n\n        self.poutput(\"Summary extended of mptcp connection \")\n        df_pcap1 = load_into_pandas(args.pcap1, self.tshark_config)\n\n        # to abstract things a bit\n        destinations = args.pcap_destinations\n        # or list(mp.ConnectionRoles)\n\n        # TODO already be done BUT NOT THE CASE FOR GOD's SAKE !\n        # TODO we should have the parser do it\n        df = load_merged_streams_into_pandas(\n            args.pcap1,\n            args.pcap2,\n            args.pcap1stream,\n            args.pcap2stream,\n            True,\n            self.tshark_config\n        )\n\n\n        for destination in destinations:\n\n            stats = mptcp_compute_throughput(\n                df,\n                args.pcap1stream,\n                destination=destination,\n                merged_df=True,\n            )\n\n            if args.json:\n                import json\n                val = json.dumps(dataclasses.asdict(stats), ensure_ascii=False)\n                self.poutput(val)\n                return\n\n            total_transferred = stats.mptcp_throughput_bytes\n            msg = (\"mptcpstream {c.mptcpstreamid} towards {destination} forwarded \"\n                   \"{c.mptcp_throughput_bytes} bytes with a goodput of {c.mptcp_goodput_bytes}\")\n            self.poutput(msg.format(c=stats, destination=destination.name))\n\n            msg = inspect.cleandoc(\"\"\"\n            tcpstream {sf.tcpstreamid} analysis:\n            - throughput: transferred {sf.throughput_bytes} out of {mptcp.mptcp_throughput_bytes} mptcp bytes, accounting for {mptcp_tput_ratio:.2f}% of MPTCP throughput\n            - goodput: transferred {sf.mptcp_goodput_bytes} out of {mptcp.mptcp_goodput_bytes}, accounting for {mptcp_gput_ratio:.2f}% of MPTCP goodput\n            \"\"\")\n\n            for subflow in stats.subflow_stats:\n\n                self.poutput(\n                    msg.format(\n                        mptcp=stats, sf=subflow,\n                     
   mptcp_tput_ratio=subflow.throughput_contribution * 100,\n                        mptcp_gput_ratio=subflow.goodput_contribution * 100,\n                    )\n                )\n\n    @is_loaded\n    @with_category(CAT_TCP)\n    def do_list_tcp_connections(self, *args):\n        \"\"\"\n        List tcp connections via their ids (tcp.stream)\n        \"\"\"\n        streams = self.data.groupby(\"tcpstream\")\n        self.poutput('%d tcp connection(s)' % len(streams))\n        for tcpstream, group in streams:\n            # self.list_subflows(mptcpstream)\n            # self.data.tcp.connection(tcpstream)\n            con = TcpConnection.build_from_dataframe(self.data, tcpstream)\n            self.poutput(con)\n\n    @is_loaded\n    @with_category(CAT_MPTCP)\n    def do_list_mptcp_connections(self, *args):\n        \"\"\"\n        List mptcp connections via their ids (mptcp.stream)\n        \"\"\"\n        # print(self.data.head())\n        streams = self.data.groupby(\"mptcpstream\")\n        self.poutput('%d mptcp connection(s)' % len(streams))\n        for mptcpstream, group in streams:\n            self.list_subflows(mptcpstream)\n            self.poutput(\"\\n\")\n\n\n    parser = MpTcpAnalyzerParser(\n        description=\"Export a pcap that can be used with wireshark to debug ids\"\n    )\n    load_pcap1 = parser.add_argument(\"imported_pcap\", type=str,\n        help=\"Capture file to cleanup.\")\n    setattr(load_pcap1, argparse_completer.ACTION_ARG_CHOICES, ('path_complete', ))\n    parser.add_argument(\"exported_pcap\", type=str, help=\"Cleaned up file\")\n\n    @with_argparser(parser)\n    def do_clean_pcap(self, args):\n        \"\"\"\n        toto\n        \"\"\"\n        msg = \"Exporting a clean version of {} in {}\"\n        self.poutput(msg.format(args.imported_pcap, args.exported_pcap))\n\n        self.tshark_config.filter_pcap(args.imported_pcap, args.exported_pcap)\n\n    # TODO it should be able to print for both\n    parser = 
gen_bicap_parser(mp.Protocol.TCP, True)\n    parser.description = inspect.cleandoc(\"\"\"\n        This function tries merges a tcp stream from 2 pcaps\n        in an attempt to print owds. See map_tcp_connection first maybe.\n    \"\"\")\n\n    # TODO add a limit of packets or use ppaged()\n    # parser.add_argument(\"protocol\", action=\"store\", choices=[\"mptcp\", \"tcp\"],\n    #     help=\"tcp.stream id visible in wireshark\")\n    # give a choice \"hash\" / \"stochastic\"\n    parser.add_argument(\n        '-v', '--verbose', dest=\"verbose\", default=False,\n        action=\"store_true\",\n        help=\"how to display each connection\"\n    )\n    parser.add_argument(\"--csv\", action=\"store\", default=None,\n        help=\"Machine readable summary.\")\n    parser.epilog = inspect.cleandoc('''\n    You can run for example:\n        map_tcp_connection examples/client_1_tcp_only.pcap examples/server_1_tcp_only.pcap  0\n    ''')\n    @with_argparser(parser)\n    @experimental\n    def do_print_owds(self, args):\n        \"\"\"\n        TODO options to diagnose errors:\n        - print unmapped packets\n        - print abnormal OWDs (negative etc)\n        \"\"\"\n\n        self.poutput(\"Loading merged streams\")\n        df = args._dataframes[\"pcap\"]\n        result = df\n        debug_dataframe(result, \"merged stream\")\n\n        # print(result[mpdata.TCP_DEBUG_FIELDS].head(20))\n        # for key, subdf in df.groupby(_sender(\"tcpdest\"))\n\n        # todo sort by chronological order ?\n        # for row in df.itertuples();\n            # self.ppaged()\n\n        if args.csv:\n            self.poutput(\"Exporting to csv\")\n            with open(args.csv, \"w\") as fd:\n                df.to_csv(\n                    fd,\n                    sep=\"|\",\n                    index=False,\n                    header=True,\n                )\n\n        # print unmapped packets\n        print(\"print_owds finished\")\n        # print(\"TODO display before 
doing plots\")\n        # TODO display errors\n        # with pd.set_option('precision', 20):\n        # with pd.option_context('float_format', '{:f}'.format):\n        with pd.option_context('precision', 10):\n            print(result[[\"owd\"]].head(20))\n        mpdata.print_weird_owds(result)\n\n\n    parser = gen_bicap_parser(Protocol.MPTCP, dest=True)\n    parser.description = inspect.cleandoc(\"\"\"\n        Qualify reinjections of the connection.\n        You might want to run map_mptcp_connection first to find out\n        what map to which\n    \"\"\")\n    parser.epilog = inspect.cleandoc(\"\"\"\n    > qualify_reinjections examples/client_2_redundant.pcapng 1 examples/server_2_redundant.pcapng 1\n    \"\"\")\n    parser.add_argument(\"--failed\", action=\"store_true\", default=False,\n        help=\"List failed reinjections too.\")\n    parser.add_argument(\"--csv\", action=\"store_true\", default=False,\n        help=\"Machine readable summary.\")\n    parser.add_argument(\"--debug\", action=\"store_true\", default=False,\n        help=\"Explain decision for every reinjection.\")\n\n    @with_argparser_and_unknown_args(parser)\n    @with_category(CAT_MPTCP)\n    def do_qualify_reinjections(self, args, unknown):\n        \"\"\"\n        test with:\n\n        \"\"\"\n\n        print(\"Qualifying reinjections for stream in destination:\")\n        destinations = args.pcap_destinations\n        print(\"Looking at destinations %s\" % destinations)\n\n        df_all = args._dataframes[\"pcap\"]\n\n        print(\"TOTO\")\n        print(df_all.head())\n\n        # TODO this should be done automatically right ?\n        # remove later\n        df_all = load_merged_streams_into_pandas(\n            args.pcap1,\n            args.pcap2,\n            args.pcap1stream,\n            args.pcap2stream,\n            mptcp=True,\n            tshark_config=self.tshark_config\n        )\n        # con = rawdf.mptcp.connection(mptcpstreamid)\n        # q = 
con.generate_direction_query(destination)\n\n        # adds a redundant column\n        df = classify_reinjections(df_all)\n\n        # print(df_all[ pd.notnull(df_all[_sender(\"reinjection_of\")])] [\n        #     _sender([\"reinjection_of\", \"reinjected_in\", \"packetid\", \"reltime\"]) +\n        #     _receiver([\"packetid\", \"reltime\"])\n        # ])\n\n        def _print_reinjection_comparison(original_packet, reinj, ):\n            \"\"\"\n            Expects tuples of original and reinjection packets\n            \"\"\"\n            # original_packet  = sender_df.loc[ sender_df.packetid == initial_packetid, ].iloc[0]\n            row = reinj\n\n            reinjection_packetid = getattr(row, _sender(\"packetid\"))\n            reinjection_start    = getattr(row, _sender(\"abstime\"))\n            reinjection_arrival  = getattr(row, _receiver(\"abstime\"))\n            original_start       = original_packet[_sender(\"abstime\")]\n            original_arrival     = original_packet[_receiver(\"abstime\")]\n\n            if reinj.redundant is False:\n                # print(original_packet[\"packetid\"])\n                msg = (\"packet {pktid} is a successful reinjection of {initial_packetid}.\"\n                       \" It arrived at {reinjection_arrival} to compare with {original_arrival}\"\n                       \" while being transmitted at {reinjection_start} to compare with \"\n                       \"{original_start}, i.e., {reinj_delta} before\")\n                # TODO use assert instead\n                if getattr(row, _receiver(\"abstime\")) > original_packet[_receiver(\"abstime\")]:\n                    print(\"BUG: this is not a valid reinjection after all ?\")\n\n            elif args.failed:\n                # only de\n                msg = \"packet {pktid} is a failed reinjection of {initial_packetid}.\"\n            else:\n                return\n\n            msg = msg.format(\n                pktid               = 
reinjection_packetid,\n                initial_packetid    = initial_packetid,\n\n                reinjection_start   = reinjection_start,\n                reinjection_arrival = reinjection_arrival,\n                original_start      = original_start,\n                original_arrival    = original_arrival,\n                reinj_delta         = reinj.reinj_delta,\n            )\n            self.poutput(msg)\n\n\n        # with pd.option_context('display.max_rows', None, 'display.max_columns', 300):\n        #     print(reinjected_packets[[\"packetid\", \"packetid_receiver\", *_receiver([\"reinjected_in\", \"reinjection_of\"])]].head())\n        # TODO filter depending on --failed and --destinations\n\n        if args.csv:\n            self.pfeedback(\"Exporting to csv\")\n            # keep redundant\n            # only export a subset ?\n            # smalldf = df.drop()\n            columns = _sender([\"abstime\", \"reinjection_of\", \"reinjected_in\", \"packetid\", \"tcpstream\", \"mptcpstream\", \"tcpdest\", \"mptcpdest\"])\n            columns += _receiver([\"abstime\", \"packetid\"])\n            columns += [\"redundant\", \"owd\", \"reinj_delta\"]\n\n            df[columns].to_csv(\n                self.stdout,\n                sep=\"|\",\n                index=False,\n                header=True,\n            )\n            return\n\n        # TODO  use args.mptcp_destinations instead\n        # TODO revert\n        # destinations = [ ConnectionRoles.Server ]\n        for destination in destinations:\n\n            self.poutput(\"looking for reinjections towards mptcp %s\" % destination)\n            sender_df = df[df.mptcpdest == destination]\n            log.debug(\"%d packets in that direction\", len(sender_df))\n\n            # TODO we now need to display successful reinjections\n            reinjections = sender_df[pd.notnull(sender_df[_sender(\"reinjection_of\")])]\n            # self.poutput(\"looking for reinjections towards mptcp %s\" % 
destination)\n\n            successful_reinjections = reinjections[reinjections.redundant == False]\n\n            self.poutput(\"%d successful reinjections\" % len(successful_reinjections))\n            # print(successful_reinjections[ _sender([\"packetid\", \"reinjection_of\"]) + _receiver([\"packetid\"]) ])\n\n            for row in reinjections.itertuples(index=False):\n\n                # loc ? this is an array, sort it and take the first one ?\n                initial_packetid = row.reinjection_of[0]\n                # print(\"initial_packetid = %r %s\" % (initial_packetid, type(initial_packetid)))\n\n                original_packet = df_all.loc[df_all.packetid == initial_packetid].iloc[0]\n                # print(\"original packet = %r %s\" % (original_packet, type(original_packet)))\n\n                # if row.redundant == True and args.failed:\n                    # _print_failed_reinjection(original_packet, row, debug=args.debug)\n\n                _print_reinjection_comparison(original_packet, row, )\n\n\n    parser = MpTcpAnalyzerParser(\n        description=\"Listing reinjections of the connection\"\n    )\n    parser.epilog = \"Hello there\"\n    # action= filter_stream\n    # TODO check it is taken into account\n    # type=MpTcpStreamId, help=\"mptcp.stream id\")\n    parser.filter_stream(\"mptcpstream\", protocol=Protocol.MPTCP)\n    parser.add_argument(\"--summary\", action=\"store_true\", default=False,\n            help=\"Just count reinjections\")\n\n    @is_loaded  # type: ignore\n    @with_argparser_test(parser, preload_pcap=True)\n    @with_category(CAT_MPTCP)\n    def do_list_reinjections(self, args, unknown):\n        \"\"\"\n        List reinjections\n        We want to be able to distinguish between good and bad reinjections\n        (like good and bad RTOs).\n        A good reinjection is a reinjection for which either:\n        - the segment arrives first at the receiver\n        - the cumulative DACK arrives at the sender sooner thanks 
to that reinjection\n\n        To do that, we need to take into account latencies\n\n        \"\"\"\n\n        df = self.pcap\n        # df = self.data[df.mptcpstream == args.mptcpstream]\n        if df.empty:\n            self.poutput(\"No packet with mptcp.stream == %d\" % args.mptcpstream)\n            return\n\n        # TODO move to outer function ?\n        # TODO use ppaged\n        reinjections = df.dropna(axis=0, subset=[\"reinjection_of\"])\n        output = \"\"\n        for row in reinjections.itertuples():\n            output += (\"packetid=%d (tcp.stream %d) is a reinjection of %d packet(s):\\n\" %\n                (row.packetid, row.tcpstream, len(row.reinjection_of)))\n\n            # assuming packetid is the index\n            for pktId in row.reinjection_of:\n                entry = self.data.loc[pktId]\n                output += (\"- packet %d (tcp.stream %d)\\n\" % (entry.packetid, entry.tcpstream))\n            # known.update([row.packetid] + row.reinjection)\n\n        self.ppaged(output,)\n        # reinjections = df[\"reinjection_of\"].dropna(axis=0, )\n        # print(\"number of reinjections of \")\n\n\n    parser = MpTcpAnalyzerParser(\n        description=\"Loads a pcap to analyze\"\n    )\n    parser.add_pcap(\"input_file\")\n    # parser.add_argument(\"input_file\", action=LoadSinglePcap,\n    #     help=\"Either a pcap or a csv file.\"\n    #     \"When a pcap is passed, mptcpanalyzer looks for a cached csv\"\n    #     \"else it generates a \"\n    #     \"csv from the pcap with the external tshark program.\"\n    # )\n    @with_argparser(parser)\n    def do_load_pcap(self, args):\n        \"\"\"\n        Load the file as the current one\n        \"\"\"\n        # print(args)\n\n        self.poutput(\"Loading %s\" % args.input_file)\n        self.data = args._dataframes[\"input_file\"]\n        self.prompt = \"%s> \" % os.path.basename(args.input_file)\n\n    def do_list_available_plots(self, args):\n        \"\"\"\n        Print 
available plots. Mostly for debug, you should use 'plot'.\n        \"\"\"\n        plot_names = self.list_available_plots()\n        print(plot_names)\n\n    def list_available_plots(self):\n        return self.plot_mgr.names()\n\n    def pcap_loaded(self):\n        return isinstance(self.data, pd.DataFrame)\n\n\n    plot_parser = MpTcpAnalyzerParser(prog='plot', description='Generate plots')\n    # TODO complete the help\n    # plot throughput tcp examples/client_2_redundant.pcapng 0 examples/server_2_redundant.pcapng 0 3\" \"quit\"\n    plot_parser.epilog = inspect.cleandoc('''\n        Here are a few plots you can create:\n\n        To plot tcp attributes:\n        > plot tcp_attr examples/client_2_filtered.pcapng 0 tcpseq\n\n        To plot one way delays, you need 2 pcaps: from the client and the server side. Then you can run:\n        > plot owd tcp examples/client_2_filtered.pcapng 0 examples/server_2_filtered.pcapng 0 --display\n    ''')\n    @with_argparser_and_unknown_args(plot_parser)\n    def do_plot(self, args, unknown):\n        \"\"\"\n        global member used by others do_plot members *\n        Loads required dataframes when necessary\n        \"\"\"\n\n        # Allocate plot object\n        plotter = self.plot_mgr[args.plot_type].obj\n\n        # TODO reparse with the definitive parser ?\n\n        # 'converts' the namespace to for the syntax define a dict\n        dargs = vars(args)\n\n        dataframes = dargs.pop(\"_dataframes\", {})\n\n        # TODO move to parser\n        for pcap, df in dataframes.items():\n            res = dargs.pop(pcap, None)\n            if res:\n                log.debug(\"Popping %s to prevent a duplicate with the one from _dataframes\" % pcap)\n\n        # dataframes = args._dataframes.values()\n        assert dataframes is not None, \"Preprocess must return a list\"\n        # pass unknown_args too ?\n        try:\n            # TODO pretty print\n            # pp = pprint.PrettyPrinter(indent=4)\n            
log.debug(\"Calling plot with dataframes:\\n%s and dargs %s\" % (\n                dataframes.keys(), dargs\n            ))\n            result = plotter.run(**dataframes, **dargs)\n        except TypeError as e:\n            self.perror(\"Problem when calling plotter.run\")\n            self.perror(\"We passed the following arguments:\")\n            print(dataframes)\n            print(dargs)\n            raise e\n\n        self.pfeedback(\"result %r\" % result)\n        # to save to file for instance\n        plotter.postprocess(result, **dargs)\n\n    @with_category(CAT_GENERAL)\n    def do_clean_cache(self, line):\n        \"\"\"\n        mptcpanalyzer saves pcap to csv converted files in a cache folder, (most likely\n        $XDG_CACHE_HOME/mptcpanalyzer). This commands clears the cache.\n        \"\"\"\n        cache = mp.get_cache()\n        self.poutput(\"Cleaning cache [%s]\" % cache.folder)\n        cache.clean()\n\n    def do_dump(self, args):\n        \"\"\"\n        Dumps content of the csv file, with columns selected by the user.\n        Mostly used for debug\n        \"\"\"\n        parser = argparse.ArgumentParser(description=\"dumps csv content\")\n        parser.add_argument('columns', default=[\n            \"ipsrc\", \"ipdst\"], choices=self.data.columns, nargs=\"*\")\n\n        parser.add_argument('-n', default=10, action=\"store\",\n                help=\"Number of results to display\")\n        args = parser.parse_args(shlex.split(args))\n        print(self.data[args.columns])\n\n    def complete_dump(self, text, line, begidx, endidx):\n        \"\"\"\n        Should return a list of possibilities\n        \"\"\"\n        matches = [x for x in self.data.columns if x.startswith(text)]\n        return matches\n\n    # not needed in cmd2\n    def do_quit(self, *args):\n        \"\"\"\n        Quit/exit program\n        \"\"\"\n        print(\"Thanks for flying with mptcpanalyzer.\")\n        return True\n\n    def do_EOF(self, line):\n        
\"\"\"\n        Keep it to be able to exit with CTRL+D\n        \"\"\"\n        return True\n\n    def preloop(self):\n        \"\"\"\n        Executed once when cmdloop is called\n        \"\"\"\n        histfile = self.config[\"mptcpanalyzer\"]['history']\n        if readline and os.path.exists(histfile):\n            log.debug(\"Loading history from %s\" % histfile)\n            readline.read_history_file(histfile)\n\n    def postloop(self):\n        histfile = self.config[\"mptcpanalyzer\"]['history']\n        if readline:\n            log.debug(\"Saving history to %s\" % histfile)\n            readline.set_history_length(histfile_size)\n            readline.write_history_file(histfile)\n\ndef main(arguments: List[str] = None):\n    \"\"\"\n    This is the entry point of the program\n\n    Args:\n        arguments: Made as a parameter since it makes testing easier\n\n    Returns:\n        return value will be passed to sys.exit\n    \"\"\"\n\n    if not arguments:\n        arguments = sys.argv[1:]\n\n    parser = argparse.ArgumentParser(\n        description='Generate MPTCP (Multipath Transmission Control Protocol) stats & plots',\n        epilog=\"You can report issues at https://github.com/teto/mptcpanalyzer\",\n    )\n\n    parser.add_argument(\n        \"--load\", \"-l\", dest=\"input_file\",\n        help=\"Either a pcap or a csv file (in good format).\"\n        \"When a pcap is passed, mptcpanalyzer will look for a its cached csv.\"\n        \"If it can't find one (or with the flag --regen), it will generate a \"\n        \"csv from the pcap with the external tshark program.\"\n    )\n    parser.add_argument('--version', action='version', version=\"%s\" % (__version__))\n    parser.add_argument(\n        \"--config\", \"-c\", action=\"store\",\n        help=\"Path towards the config file. 
If not set, mptcpanalyzer will try\"\n        \" to load first $XDG_CONFIG_HOME/mptcpanalyzer/config and then \"\n        \" $HOME/.config/mptcpanalyzer/config\"\n    )\n    parser.add_argument(\n        \"--debug\", \"-d\", choices=logLevels.keys(),\n        default=logging.getLevelName(logging.ERROR),\n        help=\"More verbose output, can be repeated to be even more \"\n        \" verbose such as '-dddd'\"\n    )\n    parser.add_argument(\n        \"--no-cache\", \"--regen\", \"-r\", action=\"store_true\", default=False,\n        help=\"mptcpanalyzer creates a cache of files in the folder \"\n        \"$XDG_CACHE_HOME/mptcpanalyzer or ~/.config/mptcpanalyzer.\"\n        \"Force the regeneration of the cached CSV file from the pcap input\"\n    )\n    parser.add_argument(\n        \"--cachedir\", action=\"store\", type=str,\n        help=\"mptcpanalyzer creates a cache of files in the folder \"\n        \"$XDG_CACHE_HOME/mptcpanalyzer.\"\n    )\n\n    args, unknown_args = parser.parse_known_args(arguments)\n\n    # remove from sys.argv arguments already processed by argparse\n    sys.argv = sys.argv[:1] + unknown_args\n\n    config = MpTcpAnalyzerConfig(args.config)\n\n    # TODO use sthg better like flent/alot do (some update mechanism for instance)\n    if args.cachedir:\n        config[\"mptcpanalyzer\"][\"cache\"] = args.cachedir  # type: ignore\n\n    # setup global variables\n    mp.__CACHE__ = mc.Cache(config.cachedir, disabled=args.no_cache)\n    mp.__CONFIG__ = config\n\n    print(\"Setting log level to %s\" % args.debug)\n    log.setLevel(logLevels[args.debug])\n    # logging.basicConfig(format='%(levelname)s:%(message)s', level=logLevels[args.debug])\n\n    log.debug(\"Starting in folder %s\" % os.getcwd())\n    # logging.debug(\"Pandas version: %s\" % pd.show_versions())\n    log.debug(\"Pandas version: %s\" % pd.__version__)\n    log.debug(\"cmd2 version: %s\" % cmd2.__version__)\n\n    try:\n\n        analyzer = MpTcpAnalyzerCmdApp(config, 
**vars(args))\n\n        # enable cmd2 debug only when required\n        analyzer.debug = logLevels[args.debug] <= logging.DEBUG\n\n        # could be moved to the class ?\n        if args.input_file:\n            analyzer.onecmd(\"load_pcap %s\" % args.input_file)\n\n        log.info(\"Starting interactive mode\")\n        exit_code = analyzer.cmdloop()\n        print(\"Exit code:\", exit_code)\n\n    except Exception as e:\n        print(\"An error happened:\\n%s\" % e)\n        print(\"Displaying backtrace:\\n\")\n        traceback.print_exc()\n        return 1\n\n    return exit_code\n\n\nif __name__ == '__main__':\n    main()\n"
    }
}

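(Aside, not part of the trace: the error in the issue title comes from LSP framing. Every message on the server's stdout must be preceded by a header block containing `Content-Length: <bytes>\r\n\r\n`; when anything else reaches the stream — e.g. a stray `print()` from a plugin — the client's message reader finds a header block without that property and raises. A minimal sketch of such a framing parser, with an illustrative `parse_lsp_message` helper that is not part of any real client, just an assumption about how the check behaves:)

```python
import json

def parse_lsp_message(buf: bytes) -> dict:
    """Parse one LSP-framed message; raise if Content-Length is missing.

    Illustrative sketch only; real clients (e.g. the StreamMessageReader in
    the stack trace above) implement this incrementally on a stream.
    """
    # Headers end at the first blank line (CRLF CRLF), body follows.
    header_end = buf.index(b"\r\n\r\n")
    headers = {}
    for line in buf[:header_end].split(b"\r\n"):
        name, _, value = line.partition(b":")
        headers[name.strip().lower()] = value.strip()

    if b"content-length" not in headers:
        # The condition behind the error reported in this issue.
        raise ValueError("Header must provide a Content-Length property.")

    length = int(headers[b"content-length"])
    body = buf[header_end + 4 : header_end + 4 + length]
    return json.loads(body)

# A well-formed frame parses:
print(parse_lsp_message(b"Content-Length: 2\r\n\r\n{}"))  # {}

# A header block without Content-Length (e.g. stray non-protocol output
# consumed as a "header") raises the ValueError above.
```

This is why the coc.nvim error tends to appear when a server-side plugin writes to stdout instead of stderr: the stray bytes corrupt the framing the reader expects.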
[Trace - 12:28:54] Sending notification 'textDocument/didSave'.
Params: {
    "textDocument": {
        "uri": "file:///home/teto/mptcpanalyzer/mptcpanalyzer/cli.py",
        "version": 1
    }
}

[Trace - 12:28:59] Received notification 'textDocument/publishDiagnostics'.
Params: {
    "uri": "file:///home/teto/mptcpanalyzer/mptcpanalyzer/cli.py",
    "diagnostics": [
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 287,
                    "character": 29
                },
                "end": {
                    "line": 287,
                    "character": 49
                }
            },
            "message": "W605 invalid escape sequence '\\d'",
            "code": "W605",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 747,
                    "character": 120
                },
                "end": {
                    "line": 747,
                    "character": 170
                }
            },
            "message": "E501 line too long (169 > 120 characters)",
            "code": "E501",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 748,
                    "character": 120
                },
                "end": {
                    "line": 748,
                    "character": 152
                }
            },
            "message": "E501 line too long (151 > 120 characters)",
            "code": "E501",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 848,
                    "character": 12
                },
                "end": {
                    "line": 848,
                    "character": 28
                }
            },
            "message": "E116 unexpected indentation (comment)",
            "code": "E116",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 933,
                    "character": 29
                },
                "end": {
                    "line": 933,
                    "character": 68
                }
            },
            "message": "E221 multiple spaces before operator",
            "code": "E221",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 934,
                    "character": 31
                },
                "end": {
                    "line": 934,
                    "character": 70
                }
            },
            "message": "E221 multiple spaces before operator",
            "code": "E221",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 935,
                    "character": 26
                },
                "end": {
                    "line": 935,
                    "character": 71
                }
            },
            "message": "E221 multiple spaces before operator",
            "code": "E221",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 936,
                    "character": 28
                },
                "end": {
                    "line": 936,
                    "character": 73
                }
            },
            "message": "E221 multiple spaces before operator",
            "code": "E221",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 955,
                    "character": 21
                },
                "end": {
                    "line": 955,
                    "character": 60
                }
            },
            "message": "E251 unexpected spaces around keyword / parameter equals",
            "code": "E251",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 955,
                    "character": 37
                },
                "end": {
                    "line": 955,
                    "character": 60
                }
            },
            "message": "E251 unexpected spaces around keyword / parameter equals",
            "code": "E251",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 956,
                    "character": 32
                },
                "end": {
                    "line": 956,
                    "character": 56
                }
            },
            "message": "E251 unexpected spaces around keyword / parameter equals",
            "code": "E251",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 956,
                    "character": 37
                },
                "end": {
                    "line": 956,
                    "character": 56
                }
            },
            "message": "E251 unexpected spaces around keyword / parameter equals",
            "code": "E251",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 958,
                    "character": 33
                },
                "end": {
                    "line": 958,
                    "character": 57
                }
            },
            "message": "E251 unexpected spaces around keyword / parameter equals",
            "code": "E251",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 958,
                    "character": 37
                },
                "end": {
                    "line": 958,
                    "character": 57
                }
            },
            "message": "E251 unexpected spaces around keyword / parameter equals",
            "code": "E251",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 959,
                    "character": 35
                },
                "end": {
                    "line": 959,
                    "character": 59
                }
            },
            "message": "E251 unexpected spaces around keyword / parameter equals",
            "code": "E251",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 959,
                    "character": 37
                },
                "end": {
                    "line": 959,
                    "character": 59
                }
            },
            "message": "E251 unexpected spaces around keyword / parameter equals",
            "code": "E251",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 960,
                    "character": 30
                },
                "end": {
                    "line": 960,
                    "character": 54
                }
            },
            "message": "E251 unexpected spaces around keyword / parameter equals",
            "code": "E251",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 960,
                    "character": 37
                },
                "end": {
                    "line": 960,
                    "character": 54
                }
            },
            "message": "E251 unexpected spaces around keyword / parameter equals",
            "code": "E251",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 961,
                    "character": 32
                },
                "end": {
                    "line": 961,
                    "character": 56
                }
            },
            "message": "E251 unexpected spaces around keyword / parameter equals",
            "code": "E251",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 961,
                    "character": 37
                },
                "end": {
                    "line": 961,
                    "character": 56
                }
            },
            "message": "E251 unexpected spaces around keyword / parameter equals",
            "code": "E251",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 962,
                    "character": 27
                },
                "end": {
                    "line": 962,
                    "character": 57
                }
            },
            "message": "E251 unexpected spaces around keyword / parameter equals",
            "code": "E251",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 962,
                    "character": 37
                },
                "end": {
                    "line": 962,
                    "character": 57
                }
            },
            "message": "E251 unexpected spaces around keyword / parameter equals",
            "code": "E251",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 955,
                    "character": 21
                },
                "end": {
                    "line": 955,
                    "character": 60
                }
            },
            "message": "E221 multiple spaces before operator",
            "code": "E221",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 956,
                    "character": 32
                },
                "end": {
                    "line": 956,
                    "character": 56
                }
            },
            "message": "E221 multiple spaces before operator",
            "code": "E221",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 958,
                    "character": 33
                },
                "end": {
                    "line": 958,
                    "character": 57
                }
            },
            "message": "E221 multiple spaces before operator",
            "code": "E221",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 960,
                    "character": 30
                },
                "end": {
                    "line": 960,
                    "character": 54
                }
            },
            "message": "E221 multiple spaces before operator",
            "code": "E221",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 961,
                    "character": 32
                },
                "end": {
                    "line": 961,
                    "character": 56
                }
            },
            "message": "E221 multiple spaces before operator",
            "code": "E221",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 962,
                    "character": 27
                },
                "end": {
                    "line": 962,
                    "character": 57
                }
            },
            "message": "E221 multiple spaces before operator",
            "code": "E221",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 968,
                    "character": 120
                },
                "end": {
                    "line": 968,
                    "character": 131
                }
            },
            "message": "E501 line too long (130 > 120 characters)",
            "code": "E501",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 976,
                    "character": 120
                },
                "end": {
                    "line": 976,
                    "character": 142
                }
            },
            "message": "E501 line too long (141 > 120 characters)",
            "code": "E501",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 1001,
                    "character": 74
                },
                "end": {
                    "line": 1001,
                    "character": 84
                }
            },
            "message": "E712 comparison to False should be 'if cond is False:' or 'if not cond:'",
            "code": "E712",
            "severity": 2
        },
        {
            "source": "pycodestyle",
            "range": {
                "start": {
                    "line": 1016,
                    "character": 20
                },
                "end": {
                    "line": 1016,
                    "character": 88
                }
            },
            "message": "E116 unexpected indentation (comment)",
            "code": "E116",
            "severity": 2
        }
    ]
}
tomv564 commented 5 years ago

Had a quick look at this, but it's hard to see which message caused the error, and this log doesn't include the header portion of the messages. Is pyls-mypy the only addition that causes the issue? Does the error appear before opening the file, after, or only after changing or saving?
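For context on the error itself: LSP messages must be framed with a `Content-Length` header before the JSON body. A minimal sketch of that framing is below (an assumption about the protocol's behavior, not coc.nvim's actual `StreamMessageReader` code); anything a server writes to stdout outside a framed message, such as a stray `print()`, reaches the client without headers and produces exactly this "Header must provide a Content-Length property" error.

```python
import json

def frame(payload: dict) -> bytes:
    """Serialize a JSON-RPC message with the mandatory Content-Length header."""
    body = json.dumps(payload).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n" % len(body) + body

def read_message(stream: bytes) -> dict:
    """Parse one framed message; fail if the header block lacks Content-Length."""
    header_block, _, rest = stream.partition(b"\r\n\r\n")
    headers = dict(
        line.split(b": ", 1)
        for line in header_block.split(b"\r\n")
        if b": " in line
    )
    if b"Content-Length" not in headers:
        # Unframed bytes (e.g. stray debug output on stdout) end up here.
        raise ValueError("Header must provide a Content-Length property.")
    length = int(headers[b"Content-Length"])
    return json.loads(rest[:length])

# Round-trip a well-formed message.
msg = {"jsonrpc": "2.0", "method": "initialized", "params": {}}
assert read_message(frame(msg)) == msg
```

So one plausible cause would be pyls-mypy (or mypy itself) printing directly to stdout, corrupting the framed stream between pyls and the client.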

teto commented 5 years ago

Yeah, it's deterministic: as soon as I include pyls-mypy in the Python environment, the server crashes or coc.nvim refuses to talk to it. As soon as I remove pyls-mypy (plain pyls), it works fine. coc.nvim is very easy to install if you want to give it a try (and significantly better than the alternatives, especially on neovim master with floating windows). Here is my config: https://github.com/teto/home/blob/master/config/nvim/coc-settings.json

teto commented 5 years ago

Bump? I miss mypy :'(

teto commented 5 years ago

Sorry, I hadn't realized I was using an old pyls-mypy. With the new one it still crashes, but not systematically and only in some more convoluted cases, so I can actually use mypy most of the time :)