I think the best way to approach this is to provide context as a new Router() instead of the entire app. From there, you can mount that router to the application with a prefix -- copying from your link, it might look something like this:
const Express = require('express')
const BodyParser = require('body-parser')
const Seneca = require('seneca')
const Web = require('seneca-web')
// AdminPromisePlugin, AdminPlugin, Routes (and Passport, if used) come from your own application

const app = Express()
app.use(BodyParser.urlencoded({ extended: true }))
app.use(BodyParser.json())

const config = {
  adapter: require('seneca-web-adapter-express'),
  // auth: Passport,
  context: new Express.Router(),
  options: { parseBody: false },
  routes: Routes
}

const seneca = Seneca()

// define plugins
seneca.use(AdminPromisePlugin)
seneca.use(AdminPlugin)
seneca.use(Web, config)
seneca.use('mesh')

seneca.ready(() => {
  // mount the router exported by seneca-web under a prefix
  app.use('/some-prefix', seneca.export('web/context')())
  app.listen(process.env.PORT, (err) => {
    console.log(err || `server started on: ${process.env.PORT}`)
  })
})
Although I'm not quite clear if that's what you were asking. You can also define a prefix at the root of the route configuration, so if you want a specific microservice to be mounted under a given prefix, that is another way to accomplish it.
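For reference, a route block with a prefix at its root might look like the sketch below -- the pin and map here are placeholders, not taken from your setup:

const Routes = [{
  prefix: '/some-prefix',     // every route in this block is mounted under /some-prefix
  pin: 'role:admin,cmd:*',    // placeholder pin
  map: {
    list: { GET: true }       // GET /some-prefix/list -> role:admin,cmd:list
  }
}]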
--
Here are some guidelines on how we're using seneca-web at my work:
The way we use seneca-web is with a (slightly) modified redis transport, as this is really a pubsub mechanism. If you have more than one "host" server running, or more than one instance of each microservice running, and are using a point-to-point consume transport mechanism like http/tcp/amqp, not all the routes from the microservices will be loaded on all the hosts.
Each of the "host" microservices (i.e. express application) will add a redis listener for role:web,routes:*
overwriting the seneca-web one. If they receive one, check an internal cache of whether they are aware of the microservice and if not, call into the prior action which hits seneca-web.
When a "host" microservice comes online, it will ask all the microservices (again via redis pubsub) to fire off all their routes -- this has the effect of sending hosts that are already online the routes again -- so we also send a "for_host" param and if it doesn't match they do nothing.
When each of the "client" microservices come online, they fire of a role:web,routes:*
message via the redis transport in their seneca.ready()
block -- this will include routes they have defined as well as additional information such as hostName
of the container firing the service, and a unique name for the service.
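And a rough sketch of the client side, again assuming seneca-redis-transport; userRoutes stands in for whatever routes the service wants mounted, and the service/hostName fields mirror the description above:

const os = require('os')

seneca
  .use('redis-transport')
  .client({ type: 'redis', pin: 'role:web,routes:*' })
  .ready(function () {
    // fire-and-forget broadcast of this service's routes to every host
    this.act('role:web', {
      routes: userRoutes,        // routes this microservice wants mounted
      service: 'user-service',   // unique name for this microservice
      hostName: os.hostname()    // container/host firing the message
    })
  })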
The end result is that a new microservice coming online will send its routes to all the hosts, and a new host coming online will receive routes from each of the microservices. We have a common node_module that handles all of this for us; one seneca plugin for the client, the other for the host.
With mesh, the pubsub mechanism is model:observe. We had some difficulty with seneca-mesh before going live and ended up switching over to seneca-amqp-transport for point-to-point communication and seneca-redis-transport for pubsub-type communication.
I may write this out with code examples and put it in a cook-book type document. It took a while to come up with a mechanism that worked properly for us -- but this method is pretty reliable.
The only case where it falls down is when there are new routes for an existing microservice -- each of the hosts will have a cache entry saying it already knows about that service and will ignore any new mounting requests. To get around this we have an administrative mechanism that clears the cache and repopulates everything. Not the cleanest, but it works -- we'll usually run it after a major deployment to ensure we don't end up with any un-mounted routes.
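Building on the host sketch above, that administrative mechanism is roughly an action like the following; the role:web-helper,cmd:refresh and cmd:announce patterns are hypothetical names for the cache-clear and "fire off your routes again" messages described earlier:

// clear the route cache and ask every microservice to re-announce its routes
seneca.add('role:web-helper,cmd:refresh', function (msg, reply) {
  knownServices.clear()
  this.act('role:web-helper,cmd:announce', { for_host: os.hostname() })
  reply()
})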
Hi, thanks for your response.
What you mention in the second approach is closer to what I'm trying to do. The goal is to not have to define routes in the "api-gateway" but to have the routes defined in the plugins, so that I don't have to update the api gateway every time a new service is added. It seems that this functionality is not supported by default, though.
Well, seneca-web itself is pretty bare-bones: it adds a "role:web,routes:*" action and assumes that microservices are going to call into it to provide additional routes.
That could be the API gateway populating itself, or it could be other microservices calling into it. You need to provide a transport mechanism if you're calling from other services -- typically some kind of pubsub fire-and-forget works best if you have multiple hosts or multiple microservices.
If you're using seneca-mesh, model:observe should work well for this use case. Full disclosure -- we tried using mesh initially at work but it really didn't end well for us, so we went to amqp/redis for point-to-point and pubsub respectively.... I've created a new package pulling out what we did to get seneca-web to work in a distributed manner: https://github.com/tswaters/seneca-web-helper
I tried out your package. I left a comment on that repo. I currently have this, though -- am I going about it the wrong way?
//server.js
'use strict'

let Seneca = require('seneca');
let SenecaWeb = require('seneca-web');
let Express = require('express');
let BodyParser = require('body-parser');

const app = Express()
  .use(BodyParser.urlencoded({ extended: true }))

const senecaWebConfig = {
  context: app,
  adapter: require('seneca-web-adapter-express'),
  routes: []
};

let seneca = Seneca()
  .use(SenecaWeb, senecaWebConfig)
  .use('mesh', {
    isbase: true,
    listen: [{
      pin: 'role:user'
    }]
  })
  .ready(() => {
    let server = seneca.export('web/context')()
    server.listen(5555, (err) => {
      console.log(err || 'server started on: 5555')
    });
    console.log(seneca.list())
  });
//microservice-1.js
const Seneca = require('seneca');

Seneca({ tag: 'user' })
  .use('web')
  .use(function user(options) {
    this.add('role:user,cmd:load', function (msg, reply) {
      var user = {
        id: 1,
        first_name: 'u1',
        last_name: 'u1',
      };
      reply(user);
    });

    this.add('role:user,cmd:list', function (msg, reply) {
      var user = [{
        id: 1,
        first_name: 'u1',
        last_name: 'u1',
      }, {
        id: 2,
        first_name: 'u2',
        last_name: 'u2',
      }, {
        id: 3,
        first_name: 'u3',
        last_name: 'u3',
      }];
      reply(user);
    });

    this.add('init:user', (msg, reply) => {
      // pass the init callback through to act so plugin initialization completes
      this.act('role:web', {
        routes: [{
          prefix: '/user',
          pin: 'role:user,cmd:*',
          map: {
            load: {
              GET: true,
              name: '',
              suffix: '/:id',
              // auth: {
              //   strategy: 'jwt'
              // }
            },
            list: {
              GET: true,
              name: '',
            }
          }
        }]
      }, reply);
    });

    return 'user';
  })
  .use('mesh', {
    pin: 'role:user'
  })
  .listen({ port: 9900, pin: 'role:user' })
  .ready(() => {
  })
You're going to need to add role:web,routes:* with model: observe on the api service so that each microservice can call back to the gateway api with its routes. I can't recall off-hand how mesh deals with setting up clients -- but I'm pretty sure it's auto-magical: once a microservice comes online with the mesh, mesh will let the microservice know what actions the mesh supports and add the appropriate clients.
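On your server.js that would look roughly like the sketch below; the listen entry with model: 'observe' follows seneca-mesh's option format, though the exact shape may need adjusting for your version:

let seneca = Seneca()
  .use(SenecaWeb, senecaWebConfig)
  .use('mesh', {
    isbase: true,
    listen: [{
      // advertise role:web,routes:* on the mesh so each microservice can
      // publish its routes back to the gateway; model: 'observe' makes it
      // a pubsub-style (fire-and-forget) subscription
      pin: 'role:web,routes:*',
      model: 'observe'
    }]
  })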
You might also have difficulty calling into role:web from the init function of a plugin, as the mesh will not have been "finalized" in time for this... I'd move the act('role:web', { routes: [] }) call to a seneca.ready callback... and, if you have difficulty with that -- the last time I used mesh, it could mark itself as "ready" prior to actually being ready -- maybe wrap the call in a liberal setTimeout (something like 500ms should be enough, but play with it and see).
As you scale horizontally, adding more services to the mesh, it'll take longer to "finalize" so that each service has the clients required to make the calls it needs -- you'll need to play with it. This is the difficulty of calling into actions during initialization.
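On the microservice, that change might look like the sketch below; here user stands for the plugin function from your microservice-1.js (with the role:web act removed from init:user), userRoutes stands for that same routes array, and the 500ms delay is just the liberal setTimeout suggested above:

Seneca({ tag: 'user' })
  .use('web')
  .use(user)
  .use('mesh', { pin: 'role:user' })
  .listen({ port: 9900, pin: 'role:user' })
  .ready(function () {
    // give the mesh a moment to finish wiring up clients before publishing routes
    setTimeout(() => {
      this.act('role:web', { routes: userRoutes })
    }, 500)
  })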
Is it possible to set up the prefix and pin to act on a microservice (plugin) with its route defined in the service? I don't want to define the routes in the gateway, just the prefix.
If it's not, would setting up seneca-web for every plugin and then using something like http-node-proxy to proxy to the correct service be a decent way to do this?
For example, in the code below the route is defined in the file: https://github.com/UrosNikolic/seneca-microservice-boilerplate/blob/master/api-gateway/api-gateway.js