EconForge / Dolo.jl

Economic modeling in Julia

ENH: special object for decision rules. #126

Closed albop closed 2 years ago

albop commented 6 years ago

@sglyon : here is an attempt to resolve issue #125 . One can do:

drs = DRStore([rand(32) for i=1:3])
drs.data :: NTuple{3, Vector{Float64}}
drs.flat :: Vector{Float64}
maximum(abs, cat(1, drs.data...) - drs.flat) # -> 0
drs.flat[10] = 1.2
maximum(abs, cat(1, drs.data...) - drs.flat) # -> 0

The field drs.data contains the decision rule as we store it right now (almost: it is a tuple instead of a vector, so that it is immutable), and drs.flat represents it across all states as one single vector. The trick is that both fields refer to the same memory and are always synchronized. I believe there is no memory-management problem, but that would require careful thinking.

I have hesitated as to how drs itself should behave. In the end, I implemented the behaviour from before (so drs[1] returns a vector) but didn't use it. The reason is backward compatibility, but I ended up not using drs directly, using drs.data most of the time instead. The alternative would be for drs to behave like drs.flat, in which case it could derive from AbstractVector. I can see how we would get some operations for free, but haven't found one very compelling reason. An idea would be to use drs[i] for flat indexing and drs[G(i)] or something like that for group indexing.

So far I have only applied it to time_iteration_direct (memory 350 -> 20) and would like some early feedback before generalizing it. Any thoughts?
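For reference, a minimal sketch of how such a synchronized pair could be built — this is not necessarily the PR's actual implementation, and the `FlatStore` name, constructor, and field layout here are illustrative assumptions:

```julia
# Hypothetical sketch: a DRStore-like container whose `data` entries
# are views into the single `flat` vector, so the two stay in sync.
struct FlatStore{T}
    flat::Vector{T}
    data::Tuple   # tuple of views, one per group
end

function FlatStore(vecs::Vector{Vector{T}}) where {T}
    flat = vcat(vecs...)                      # one contiguous buffer
    offsets = cumsum([0; length.(vecs)])      # group boundaries in `flat`
    data = ntuple(i -> view(flat, offsets[i]+1:offsets[i+1]), length(vecs))
    FlatStore{T}(flat, data)
end

fs = FlatStore([fill(1.0, 4) for i in 1:3])
fs.flat[5] = 1.2          # mutating through the flat vector...
fs.data[2][1]             # ...is visible through the grouped view: 1.2
```

Because the tuple holds views rather than copies, writes through either field are immediately visible through the other, with no explicit synchronization step.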

albop commented 6 years ago

Thanks for the comments. So, this seems like the right direction. I'll make two changes:

sglyon commented 6 years ago

What if we did drs[i, :] for drs.grouped[i] (or the colon in the first position -- whichever feels more natural to you)

albop commented 6 years ago

That would be a bit like a non-square 2d array. Sounds very good to me!

albop commented 6 years ago

As for orientation, Dolo notations are (i, n), where i is the exogenous index and n the endogenous one, so your orientation makes sense to me. It's not the classical Julia ordering, but I can live with that.
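The convention discussed above can be sketched in Julia as follows — the `Grouped` type and its fields are illustrative names, not the PR's actual code: `drs[k]` hits the flat vector while `drs[i, :]` returns group i.

```julia
# Illustrative sketch of the proposed two-flavour indexing convention.
struct Grouped{T}
    flat::Vector{T}
    ranges::Vector{UnitRange{Int}}   # one index range per group i
end

Base.getindex(g::Grouped, k::Int) = g.flat[k]                           # flat indexing
Base.getindex(g::Grouped, i::Int, ::Colon) = view(g.flat, g.ranges[i])  # group i

g = Grouped(collect(1.0:9.0), [1:3, 4:6, 7:9])
g[5]        # 5.0, read from the flat vector
g[2, :]     # view of group 2: [4.0, 5.0, 6.0]
```

Returning a view from `g[i, :]` keeps the "same memory" property: writing into a group is visible through the flat vector.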

sglyon commented 6 years ago

Ok sounds great!

albop commented 6 years ago

Quick question: what is a nice, simple way to do a[:] += b[:] without allocating?

sglyon commented 6 years ago

I believe that a .+= b would work.

You can always use the trusty BLAS axpy routine:

julia> a = [1, 2, 3.0]
3-element Array{Float64,1}:
 1.0
 2.0
 3.0

julia> b = [10, 10, 10.0]
3-element Array{Float64,1}:
 10.0
 10.0
 10.0

julia> Base.axpy!(1, b, a)
3-element Array{Float64,1}:
 11.0
 12.0
 13.0

julia> a
3-element Array{Float64,1}:
 11.0
 12.0
 13.0

It looks like the broadcasting version is faster:

julia> function f1(a, b)
       a .+= b
       end
f1 (generic function with 1 method)

julia> function f2(a, b)
       Base.axpy!(1, b, a)
       end
f2 (generic function with 1 method)

julia> a = rand(1000);

julia> b = rand(1000);

julia> f1(a, b); @allocated f1(a, b)
96

julia> f2(a, b); @allocated f2(a, b)
0

julia> using BenchmarkTools

julia> @benchmark f1(a, b)
BenchmarkTools.Trial:
  memory estimate:  96 bytes
  allocs estimate:  4
  --------------
  minimum time:     341.165 ns (0.00% GC)
  median time:      353.771 ns (0.00% GC)
  mean time:        394.723 ns (2.26% GC)
  maximum time:     13.852 μs (94.29% GC)
  --------------
  samples:          10000
  evals/sample:     218

julia> @benchmark f2(a, b)
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     1.381 μs (0.00% GC)
  median time:      1.403 μs (0.00% GC)
  mean time:        1.541 μs (0.00% GC)
  maximum time:     74.430 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     10

albop commented 6 years ago

Are you sure about the former? Any ref?

sglyon commented 6 years ago

Yeah, I'm sure about it.

a .+= b is syntactic sugar for broadcast!(+, a, a, b), which says (in the order the arguments appear in the call): broadcast the + function, store the result in a, and apply + to the arguments a and b:

julia> expand(:(a .+= b))
:((Base.broadcast!)(+, a, a, b))
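(The snippet above is Julia 0.6; in Julia 1.x, expand was replaced by Meta.lower, but a .+= b still lowers to an in-place broadcast.) A quick check that the broadcast really mutates a in place rather than binding a new array:

```julia
# a .+= b mutates a in place; the binding is unchanged afterwards.
a = [1.0, 2.0, 3.0]
b = [10.0, 10.0, 10.0]
old = a                # keep a reference to the original array
a .+= b                # in-place broadcast, same as broadcast!(+, a, a, b)
a === old              # true: still the exact same array object
```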

albop commented 4 years ago

Getting back to it, not understanding much. Looks like my former self was a better version of me.

albop commented 2 years ago

This is now redundant, as we are using (hopefully fast) views.
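For the record, views give the same aliasing behaviour the DRStore was built for — a SubArray shares the parent's memory, so writes through one are visible through the other:

```julia
# SubArrays created by `view` alias the parent array (no copy).
x = collect(1.0:9.0)
v = view(x, 4:6)
v[1] = 42.0            # writes directly into x's storage
x[4]                   # 42.0
```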