sonata-nfv / son-emu

Attention! Legacy! This repo will be replaced with https://github.com/containernet/vim-emu

service endpoints deployed from dummy gatekeeper #157

Closed stevenvanrossem closed 8 years ago

stevenvanrossem commented 8 years ago

Currently, the service endpoints are disregarded by the dummy gatekeeper. If we want to send/receive test traffic in a deployed service, we somehow need to access these endpoints.

mpeuster commented 8 years ago

@stevenvanrossem one simple solution is to not use the service endpoints and start the traffic generators manually, as described in one of the emulator examples: https://github.com/sonata-nfv/son-emu/wiki/Example-3

This was my work assumption for the Y1 demo. It corresponds to your third point "dedicated docker containers", right?
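
For illustration, a minimal sketch of that manual approach using the emulator's Python API (container names and images here are placeholders, and the call sequence follows the son-emu example topologies; treat it as a sketch, not the wiki example itself):

from emuvim.dcemulator.net import DCNetwork

net = DCNetwork()
dc1 = net.addDatacenter("dc1")
net.start()
# endpoint containers started by hand to act as traffic source/sink
client = dc1.startCompute("client", image="ubuntu:trusty")
server = dc1.startCompute("server", image="ubuntu:trusty")
# a traffic generator (e.g. iperf) is then launched by hand inside
# these containers, as described in the linked example
net.CLI()
net.stop()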

stevenvanrossem commented 8 years ago

The example assumes that the endpoints are included as VNFs in the NSD. I was thinking of a way to deploy the ns:input or ns:output links from the NSD via the (dummy) gatekeeper. To keep the NSD compatible with the SP, I don't think we should include special 'endpoint VNFs' in the NSD. Maybe have the dummy gatekeeper deploy a default endpoint VNF for each external interface of the service listed in the NSD field 'connection_points'?

mpeuster commented 8 years ago

No. In the example, the endpoint containers are deployed by the user (steps 4 and 6). The NSD only contains VNF1.

I also think there shouldn't be special endpoint VNFs in the NSD. This is why we agreed at some point that we let the user deploy the endpoints by hand. E.g. the user wants to have iperf containers as endpoints.

Automating this with the dummy GK is fine by me. But we should make it configurable with a flag, like: AUTO_DEPLOY_ENDPOINTS = None|(name-of-docker-image-for-endpoint)

What do you think?
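
For illustration, a minimal sketch of how such a flag could work in the dummy GK, combining it with the per-connection-point idea from the previous comment (the flag name comes from this thread; the function, the container naming scheme, and the startCompute usage are assumptions, not existing code):

# module-level switch: None disables auto-deployment; otherwise the
# value names the docker image used for every endpoint container
AUTO_DEPLOY_ENDPOINTS = None

def auto_deploy_endpoints(nsd, dc):
    if AUTO_DEPLOY_ENDPOINTS is None:
        return
    for cp in nsd.get("connection_points", []):
        # e.g. NSD connection point "ns:input" -> container "ep-ns-input"
        name = "ep-" + cp["id"].replace(":", "-")
        dc.startCompute(name, image=AUTO_DEPLOY_ENDPOINTS)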

stevenvanrossem commented 8 years ago

Sorry, I was not aware there was already an agreement to deploy the endpoints by hand. This offers the most flexibility, but also means more work for the user to deploy everything... (after the package is pushed and deployed, the user still needs to manually add and chain the endpoints correctly). I also think having the flag to automate it or not would be the right way.

mpeuster commented 8 years ago

The agreement basically was that endpoint VNFs won't be in the NSD. So doing it by hand was kind of the obvious shortcut to demo it. But, automating it would be the way to go 👍

jbonnet commented 8 years ago

@mpeuster @stevenvanrossem Please bear in mind that

  1. The developer might not be the end user;
  2. Some services might not need to start immediately after deployment. Therefore the 'lifecycle' section in the VNFD (and the NSD too? Can't remember)...
mpeuster commented 8 years ago

@jbonnet True. We are just focusing on the emulator context here ... so user means: user of the emulator == developer of the network service who wants to debug/test their service :-)

There is currently no extensive lifecycle support in the emulator. It can only directly start the VNF containers and let them run. Not sure if it will support more complex lifecycles at some point in time.

jbonnet commented 8 years ago

@mpeuster ha! sorry, I keep forgetting about the emulator! :-(

stevenvanrossem commented 8 years ago

Now the dummy gatekeeper expects a unique VNFD for each VNF in the NSD. I would think that a VNFD can be re-used multiple times inside an NSD? (e.g. the service scales up and deploys multiple instances of the same VNFD, or multiple endpoints with the same VNFD are deployed)

So in the dummy GK, this means that a VNF in the NSD is only referenced by its id and not by its name (different vnf_ids can re-use the same vnf_name...). OK if I change it to this behavior?
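
A minimal sketch of the proposed keying (lookup_vnfd and start_container are hypothetical helpers standing in for the dummy GK internals):

def deploy_service(nsd, lookup_vnfd, start_container):
    # key all bookkeeping by vnf_id, which is unique within the NSD,
    # instead of vnf_name, which repeats when a VNFD is re-used
    instances = {}
    for vnf in nsd["network_functions"]:
        vnfd = lookup_vnfd(vnf["vnf_name"])  # descriptor, possibly shared
        instances[vnf["vnf_id"]] = start_container(vnf["vnf_id"], vnfd)
    return instances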

jbonnet commented 8 years ago

@stevenvanrossem Excellent question! I don't think we've solved multiple instances of the same VNF in one NS yet... have we? I mean, probably the NSD should mention the VNF only once, and then the multiplicity should be handled by the VNFFG? I don't even know if this is possible...

On the real GK, the VNFD id and the NSD id are generated only after the VNFD/NSD is stored successfully in the Catalogues. I don't know if the ids you're mentioning are the same as these or not, but I'd prefer to keep the behaviour of the 'dummy' GK similar to the real one.

stevenvanrossem commented 8 years ago

I think that even in the case of multiplicity, the 'network_functions' section of the NSD should mention all VNFs. If a VNF is re-used multiple times, each instance gets a unique vnf_id here but uses the same vnf_name?

It then depends on how the gatekeeper processes the NSD: does it take the vnf_id found inside the NSD or the VNFD id to identify a deployed vnf in the service?

I think the VNFD id will be the same if the same VNFD is re-used multiple times, so better to rely on the vnf_id inside the NSD?

jbonnet commented 8 years ago

@stevenvanrossem Well, let's hear from @mbredel, our schema master, but let's go deeper now.

What gain do you foresee in mentioning the same VNF more than once in the NSD? The NSD is supposed to list the 'resources' the service needs (in this case, the VNFs). The Catalogue will hold only one VNF with the same triplet <vendor/name/version>. The only advantage I see in your suggestion is if that vnf_id is a generic string, e.g. 'firewall-1', that is later used in the NSD for referencing (but then 'vnf_id' is probably misleading; maybe 'vnf_reference' would be better?). In that case, you could repeat the VNF, giving it a different id/reference, e.g. 'firewall-2'.

But, again, I'm probably missing the point here...

stevenvanrossem commented 8 years ago

Indeed, what you describe (different vnf_references to repeat the same VNF) would be the gain, meaning that firewall-1, firewall-2, ... are different VNF instances deploying the same VNF image. The advantage would also be:

  - In the emulator, the endpoints of the service (input/output) can be deployed with the same generic docker container, so you get traffic to/from the service by accessing those endpoint containers.
  - At a later stage (maybe an idea for Y2...) we could think about how scaling can be easily implemented using the SONATA NSD; re-using the same VNF image multiple times could be an option then, e.g. the SLM service plugin scales the service by modifying the NSD to include more VNF instances of the same image...

jbonnet commented 8 years ago

Well... The idea is the developer owns the NSD/VNFD, where scaling is already specified. No other entity (in your example, the SLM) should modify it.

Again, the NSD/VNFD should specify which resources are needed. If the service needs 10 instances of the same function you shouldn't need to explicitly and repeatedly state them in the NSD. That's boring and error prone. We must raise the level here.

And scaling a service may be a complex issue; we should start by scaling a function.

mpeuster commented 8 years ago

@stevenvanrossem I am a bit lost regarding your question about the dummy gatekeeper. How do you plan to change its behavior? Maybe we can work with a real example, taken from: https://github.com/sonata-nfv/son-examples/blob/master/service-projects/sonata-empty-service-emu/sources/nsd/nsd.yml

There we have three VNFs referenced in the NSD:

network_functions:
  - vnf_id: "empty_vnf1"
    vnf_vendor: "eu.sonata-nfv"
    vnf_name: "empty-vnf1"
    vnf_version: "0.1"
  - vnf_id: "empty_vnf2"
    vnf_vendor: "eu.sonata-nfv"
    vnf_name: "empty-vnf2"
    vnf_version: "0.1"
  - vnf_id: "empty_vnf3"
    vnf_vendor: "eu.sonata-nfv"
    vnf_name: "empty-vnf3"
    vnf_version: "0.1"

What the dummy GK does right now is identify each of them by the vnf_id string, right? (At least this was the plan, I don't say that it doesn't have any bugs :-P)

This means if I want to use the same VNFD twice, I would do:

network_functions:
  - vnf_id: "empty_vnf1"
    vnf_vendor: "eu.sonata-nfv"
    vnf_name: "empty-vnf1"
    vnf_version: "0.1"
  - vnf_id: "empty_vnf2"
    vnf_vendor: "eu.sonata-nfv"
    vnf_name: "empty-vnf1"
    vnf_version: "0.1"

What would your plan look like?

stevenvanrossem commented 8 years ago

Ok, maybe scaling was not such a good example. Suppose the same VNF (a firewall) is used at multiple inputs of the service (firewall@input1 and firewall@input2). Then the VNFD of the firewall is just copied for firewall@input1 and firewall@input2, and the NSD includes 2 references to 2 VNFDs which are just copies of each other? Isn't this also boring and error-prone?

stevenvanrossem commented 8 years ago

Hi Manuel, indeed, that is the plan, but it does not work right now because vnf_name (which refers to the VNFD) is used as the identifier for a deployed VNF: e.g. the placement algorithm assigns a DC to the vnf_name, and in the code there is this vnf_name2docker_name dict, which is no longer correct in this case, so it needs some polishing...
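
A sketch of the kind of change meant here (vnf_id2docker_name and the "mn." naming scheme are assumptions mirroring the existing vnf_name2docker_name dict, not actual code):

def build_docker_names(nsd):
    # key the mapping by vnf_id so two instances of the same VNFD
    # (i.e. the same vnf_name) no longer collide
    return {vnf["vnf_id"]: "mn." + vnf["vnf_id"]
            for vnf in nsd["network_functions"]}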

jbonnet commented 8 years ago

:-) @stevenvanrossem With two it is not so boring, but with 10? And it is always error-prone. Plus, you're hardcoding the service architecture, which prevents scaling. Amazon, for example, has done scaling for years (you specify a max number of EC2 instances you want); I don't think we have any margin to implement less.

But we're arguing about a solution for a problem I haven't yet understood: what exactly is the problem we're trying to solve with this repetition of function specifications?

Unless there's a concrete case in which the 2 (or more) repeated functions are needed but not for scaling... I wouldn't follow this path.

mpeuster commented 8 years ago

@stevenvanrossem Ah ok, yes that could be. This should really be changed to use the vnf_id for all internal identification. Go ahead with your changes. I am currently still working on the example services and demo preparation. Don't plan to touch the dummy GK today or Monday.

stevenvanrossem commented 8 years ago

@jbonnet interesting discussion, I see your point :-) The current examples we use in the emulator deploy such repeated functions as a test. That's why I was thinking to change it...

But following your idea, the NSD will always contain a single instance of a VNF, which is then scaled by another algorithm (tbd), so it's food for thought how that scaling algorithm would work without changing the NSD.

jbonnet commented 8 years ago

@stevenvanrossem 👍 So this is a 'problem' restricted to testing the fake GK. About your question: there will be a dedicated scaling plugin, as well as possibly SSMs/FSMs specialised in scaling, on the SP side. Scaling a service might imply scaling the VNF(s), scaling the network inter-connecting them or connecting them to the world, or moving/creating one or more VNFs (e.g. towards a PoP that's nearer the point where traffic enters/leaves the service).

Let's start by looking at VNF scaling. The VNFD already has an attribute stating how many max instances each VDU/VNFC might have. The scaling algorithm can use this information to create/inter-connect more VNFCs of the same VNF, therefore accepting more traffic. Depending on the specific VNF, a load-balancer might be added automatically or implemented internally by the VNF.

My 2cents...
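
As a toy sketch of the check described above (the 'scale_in_out'/'maximum' field names are an assumption about the SONATA VNFD schema, not verified here):

def can_scale_out(vdu, current_instances):
    # per-VDU maximum instance count, defaulting to 1 if unspecified
    max_instances = vdu.get("scale_in_out", {}).get("maximum", 1)
    return current_instances < max_instances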

stevenvanrossem commented 8 years ago

I propose to use the term SAP (Service Access Point, also used by ETSI) instead of 'service endpoints'. The dummy gatekeeper can now be deployed with a flag to start the SAPs as a specific docker container, so in the topology file you can use: SonataDummyGatekeeperEndpoint("0.0.0.0", 5000, deploy_sap=True)
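
For context, a minimal topology file using this flag could look roughly like this (module paths and call sequence follow the son-emu examples, but treat it as a sketch):

from emuvim.dcemulator.net import DCNetwork
from emuvim.api.sonata import SonataDummyGatekeeperEndpoint

net = DCNetwork()
dc1 = net.addDatacenter("dc1")
# dummy gatekeeper REST endpoint; deploy_sap=True auto-starts the SAPs
sdkg1 = SonataDummyGatekeeperEndpoint("0.0.0.0", 5000, deploy_sap=True)
sdkg1.connectDatacenter(dc1)
sdkg1.start()
net.start()
net.CLI()
net.stop()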

The docker container that is used for the SAPs is defined in src/emuvim/api/sonata/sap_vnfd.yml; the image is currently registry.sonata-nfv.eu:5000/son-emu-sap (Dockerfile pushed to son-examples).

Deploying the same VNFD for multiple vnf_ids is not yet implemented further, because there is no use-case at the moment.

See commit https://github.com/sonata-nfv/son-emu/commit/a48e9e685c8540ab08b5eca3f7c0c6cc1354dbd7