eclipse / kapua


EntityManagerFactory(s) created by the persistence layer have to be closed #183

Open stefanomorson opened 7 years ago

stefanomorson commented 7 years ago

The issue affects some classes, for example some singletons, that have application scope and hold resources that have to be released before the Kapua application is closed. Currently these Kapua applications are the Console, the RESTful APIs and the Broker, but the same applies to any microservice that assembles one or more Kapua services in a JVM. The solution should be easy for the developer to apply and should reduce the need to inherit from base classes, in order to keep the classes where these resources are managed as simple as possible. At the same time, the code dedicated to startup/shutdown management should be kept simple and compact, without requiring a priori knowledge of the implementation of the single Kapua services.

stefanomorson commented 7 years ago

The idea is to introduce a new abstract class in the framework called KapuaApplication. The class has two simple methods: start() and stop().
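A minimal sketch of what this could look like; holding the listener list directly in the application and the register() method anticipate the discussion further down, so both are assumptions rather than a committed API:

import java.util.ArrayList;
import java.util.List;

// Each type would live in its own source file.
public interface LifecycleListener
{
    void onStart();

    void onStop();
}

public abstract class KapuaApplication
{
    private final List<LifecycleListener> listeners = new ArrayList<>();

    public void register(LifecycleListener listener)
    {
        listeners.add(listener);
    }

    public void start()
    {
        // Listeners are notified in registration order.
        for (LifecycleListener listener : listeners) {
            listener.onStart();
        }
    }

    public void stop()
    {
        for (LifecycleListener listener : listeners) {
            listener.onStop();
        }
    }
}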

The class has to be extended by a custom Kapua application. It also has to be instantiated at the top level of the containing application. The containing application can be a Java SE application or, for example, a servlet container. The protocol for a Java SE application should be the following:

class MyApplication extends KapuaApplication
{
    public static void main(String[] args)
    {
        MyApplication application = new MyApplication();
        application.start();

        // Do all the work between a start() and a stop()
        ...

        application.stop();
    }
}

The protocol for a servlet container could be the following:

import javax.servlet.ServletContext;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class MyApplicationServletListener implements ServletContextListener
{
   private MyApplication myApplication;
   private ServletContext servletContext;

   public MyApplicationServletListener()
   {
      myApplication = new MyApplication();
   }

   @Override
   public void contextInitialized(ServletContextEvent ctxEvent)
   {
      servletContext = ctxEvent.getServletContext();
      myApplication.start();
   }

   @Override
   public void contextDestroyed(ServletContextEvent ctxEvent)
   {
      myApplication.stop();
   }
}
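
As a side note, on a Servlet 3.0+ container the listener could also be registered with the @WebListener annotation instead of a web.xml entry (a deployment detail, not part of the proposal):

import javax.servlet.annotation.WebListener;

// Servlet 3.0+: the container discovers the listener automatically,
// so no web.xml <listener> entry is needed.
@WebListener
public class MyApplicationServletListener implements ServletContextListener
{
    ... // same implementation as above
}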

The start() and stop() methods will trigger events that can be listened to by resource managers. A resource management class can listen to these events by implementing a LifecycleListener. Suppose there's a resource manager class:

public class ResourceManager
{
    private static ResourceManager instance;

    private ManagedResource resource;

    static {
        instance = new ResourceManager();
    }

    private ResourceManager()
    {
        // The resource is initialized here and needs to be closed
        // (resource.close()) before the application shuts down.
        resource = new ... ;
    }

    public static ResourceManager getInstance()
    {
        return instance;
    }
}

It will be replaced by the following:

import javax.inject.Inject;

public class ResourceManager
{
    private DisposableResource resource;

    @Inject
    public ResourceManager(ApplicationLifecycle appLifecycle)
    {
        final ResourceManager impl = this;

        // Manage the lifecycle
        appLifecycle.add(new LifecycleListener() {

            @Override
            public void onStart()
            {
            }

            @Override
            public void onStop()
            {
                // Manage resource release
                impl.resource.close();
            }
        });

        // The resource is initialized here and needs to be closed
        // (resource.close()) before the application shuts down.
        resource = new ... ;
    }
}
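
For reference, the injected ApplicationLifecycle could be little more than a listener registry, in which case KapuaApplication's start() and stop() would delegate to it instead of holding the listener list themselves. A sketch (the notify method names are assumptions):

import java.util.ArrayList;
import java.util.List;

public class ApplicationLifecycle
{
    private final List<LifecycleListener> listeners = new ArrayList<>();

    public void add(LifecycleListener listener)
    {
        listeners.add(listener);
    }

    // Invoked by KapuaApplication.start()/stop(); listeners are
    // notified in registration order.
    void notifyStart()
    {
        listeners.forEach(LifecycleListener::onStart);
    }

    void notifyStop()
    {
        listeners.forEach(LifecycleListener::onStop);
    }
}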

riccardomodanese commented 7 years ago

+1

muros-ct commented 7 years ago

I will try to describe my understanding with an example of usage with Liquibase; correct me if I am wrong. I would create a LiquibaseResourceManager class that would get an ApplicationLifecycle object injected through its constructor. I would use the appLifecycle object to add a LifecycleListener. In this listener, the onStart method would use the Liquibase API to create the database. The onStop method would be empty, but in the case of unit tests it might contain teardown SQL to clean up the database. An application such as the Console or the REST interface would then create in its constructor a single ApplicationLifecycle object that would contain the list of all listeners. The onStart methods of these listeners would be executed at application start, and when the application is closed all the onStop methods of the listeners would be executed.
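
A sketch of that LiquibaseResourceManager, assuming the ApplicationLifecycle API proposed above; the JDBC settings and changelog path are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;

import javax.inject.Inject;

import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.ClassLoaderResourceAccessor;

public class LiquibaseResourceManager
{
    @Inject
    public LiquibaseResourceManager(ApplicationLifecycle appLifecycle)
    {
        appLifecycle.add(new LifecycleListener() {

            @Override
            public void onStart()
            {
                // Create/update the database schema from the changelog.
                try (Connection connection = DriverManager.getConnection("jdbc:h2:mem:kapua", "kapua", "kapua")) {
                    Database database = DatabaseFactory.getInstance()
                            .findCorrectDatabaseImplementation(new JdbcConnection(connection));
                    new Liquibase("liquibase/master.xml", new ClassLoaderResourceAccessor(), database).update("");
                } catch (Exception e) {
                    throw new IllegalStateException("Schema creation failed", e);
                }
            }

            @Override
            public void onStop()
            {
                // Empty in production; unit tests might run teardown SQL here.
            }
        });
    }
}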

Questions:

  1. Am I correct?
  2. What would be the order of listener execution and is order important?
  3. Where is ApplicationLifecycle object created and when are onStart methods executed?
  4. What if resources that are managed are interdependent of each other?
  5. What happens when you have multiple Applications (ex. console and REST) and both try to create database tables?
  6. Can applications run on multiple nodes?

Those are just my considerations. Otherwise I agree with this principle, as we discussed it at Monday's meeting.

stefanomorson commented 7 years ago

@muros-ct

1 - Not necessarily. You can also use the schema management APIs. For example, in a test class:

import cucumber.api.Scenario;
import cucumber.api.java.After;
import cucumber.api.java.Before;

public class MyServiceTestSteps extends KapuaTest
{
   private MyApplication myApplication;

   public MyServiceTestSteps()
   {
   }

   @Before
   public void beforeScenario(Scenario scenario) throws Exception
   {
      // Do schema create
      createSchema();

      myApplication = new MyApplication();
      myApplication.start();
   }

   @After
   public void afterScenario() throws Exception
   {
       myApplication.stop();

       // Do schema drop
       dropSchema();
   }
}

Or you could use the LifecycleListener approach. To make it easier, maybe the KapuaApplication class could expose a register method:

import cucumber.api.Scenario;
import cucumber.api.java.After;
import cucumber.api.java.Before;

public class MyServiceTestSteps extends KapuaTest
{
   private MyApplication myApplication;

   public MyServiceTestSteps()
   {
   }

   @Before
   public void beforeScenario(Scenario scenario) throws Exception
   {
      myApplication = new MyApplication();
      myApplication.register( new LifecycleListener() {

         @Override
         public void onStart() {
            // Do schema create
            createSchema();
         }

         @Override
public void onStop() {
            // Do schema drop
            dropSchema();
         }      
      });

      myApplication.start();
   }

   @After
   public void afterScenario() throws Exception
   {
       myApplication.stop();
   }
}

2 - The order of execution will be determined by the registration sequence, even though other approaches can be considered.

3 - ApplicationLifecycle will live inside the KapuaApplication or, better to say, inside the locator.

4 - For the use cases that I defined at the beginning this shouldn't be frequent. Can you provide an example?

5 - The solution proposed in this issue is not meant to solve or manage the db schema management problem. However, for the scenario mentioned, I think we should not let the applications create the schema themselves. We should provide some tooling that is capable of producing the whole schema from the single parts defined by each service. The way the schema is created also depends on the type of deployment and the needs of the final user.

6 - Yes (follows from point 5).
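
To illustrate answer 2 with the register() method sketched earlier (the listener names and output are hypothetical):

public class StartupOrderExample
{
    public static void main(String[] args)
    {
        MyApplication application = new MyApplication();

        // onStart() callbacks fire in registration order, so in this
        // sketch the "schema" listener always runs before "resources".
        application.register(namedListener("schema"));
        application.register(namedListener("resources"));

        application.start();
        application.stop();
    }

    private static LifecycleListener namedListener(String name)
    {
        return new LifecycleListener() {

            @Override
            public void onStart()
            {
                System.out.println(name + ": onStart");
            }

            @Override
            public void onStop()
            {
                System.out.println(name + ": onStop");
            }
        };
    }
}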