pappy23 / pnnet

Automatically exported from code.google.com/p/pnnet

TODO #15

Closed: GoogleCodeExporter closed this issue 9 years ago

GoogleCodeExporter commented 9 years ago
1) check neuron/link deletions. Possible memory leaks with shared weights
2) fix segfaults during concurrent thread execution. Possible reason: concurrent read-only access to Net attributes
3) segfaults when creating a big valarray. It means we can't use valarrays :( They are possibly all broken (must be kept very small)
4) write an algorithm to scale/crop images
5) write an algorithm to extract part of an image (with different rotations/transformations) and process it independently. This is needed for the sliding-window approach to face search when we look for rotated faces (see the sketch after this list)
6) write PPM/PGM/PBM, BMP, PNG, TIFF parsers. Rewrite the JPEG reader.
7) fix the save/load procedure. It is recursive and uses a lot of stack memory
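
A minimal sliding-window sketch for item 5. The Gray struct, crop() helper, and classify callback are illustrative assumptions, not existing pnnet code; a rotated-face pass would additionally rotate each crop before classifying:

#include <vector>

// Hypothetical grayscale image: row-major pixel buffer.
struct Gray { int w, h; std::vector<float> px; };

// Copy a win x win sub-window starting at (x0, y0).
Gray crop(const Gray& img, int x0, int y0, int win)
{
    Gray out;
    out.w = win; out.h = win;
    out.px.resize(static_cast<size_t>(win) * win);
    for(int y = 0; y < win; ++y)
        for(int x = 0; x < win; ++x)
            out.px[y * win + x] = img.px[(y0 + y) * img.w + (x0 + x)];
    return out;
}

// Slide a fixed-size window over the image; each crop is processed
// independently (e.g. fed to the face-detecting network).
template<class Classify>
void slideWindow(const Gray& img, int win, int step, Classify classify)
{
    for(int y = 0; y + win <= img.h; y += step)
        for(int x = 0; x + win <= img.w; x += step)
            classify(crop(img, x, y, win), x, y);
}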

Solutions:
1) don't delete neurons :)
2) needs fixing. We can copy a const pointer to the Net attributes into each thread's local storage, or replace the operator[] call with map::at() (see the sketch after this list)
3) don't use big valarrays
4,5,6) do it with ImageMagick
7) increase the stack size with ulimit -s (ulimit is a shell builtin, so plain "sudo ulimit" won't work; raising the hard limit needs a root shell)
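
On solution 2: std::map::operator[] default-constructs and inserts a value when the key is missing, so even a logically read-only lookup can mutate the map and race with concurrent readers, while map::at() is a pure lookup that throws std::out_of_range instead. A minimal sketch (the attribute map shown is made up, not the real Net member):

#include <map>
#include <string>

std::map<std::string, float> netAttributes; // stands in for the shared Net attributes

float readAttribute(const std::string& key)
{
    // netAttributes[key] would insert a default-constructed 0.0f on a miss,
    // mutating the container under other readers' feet.
    return netAttributes.at(key); // read-only; throws std::out_of_range on a miss
}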

Original issue reported on code.google.com by yashin.vladimir on 4 May 2009 at 2:44

GoogleCodeExporter commented 9 years ago
2) fixed
3) fixed. The problem wasn't in valarray

Original comment by yashin.vladimir on 4 May 2009 at 6:27

GoogleCodeExporter commented 9 years ago
update:
6) write our own reader for the PPM/PGM/PBM formats (maybe with gzipped/bzipped variants) to drop the dependency on libJPEG
8) introduce a new Image type and a set of algorithms to manipulate it. We will still need Image <-> valarray <-> TrainPattern conversions
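
A rough sketch of what the binary PGM (P5) path of such a reader could look like; the Image layout here is only a guess at the type proposed in item 8, and comment-line handling is omitted for brevity:

#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>

struct Image { int width, height; std::vector<unsigned char> pixels; };

Image readPGM(const std::string& path)
{
    std::ifstream in(path.c_str(), std::ios::binary);
    std::string magic;
    int w, h, maxval;
    in >> magic >> w >> h >> maxval;
    if(!in || magic != "P5" || maxval > 255)
        throw std::runtime_error("unsupported PGM: " + path);
    in.get(); // consume the single whitespace byte after the header
    Image img;
    img.width = w;
    img.height = h;
    img.pixels.resize(static_cast<size_t>(w) * h);
    in.read(reinterpret_cast<char*>(&img.pixels[0]),
            static_cast<std::streamsize>(img.pixels.size()));
    if(!in)
        throw std::runtime_error("truncated PGM: " + path);
    return img;
}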

Original comment by yashin.vladimir on 6 May 2009 at 7:27

GoogleCodeExporter commented 9 years ago
9) fix the learning algorithm. It should take into account that bias weights might be shared across neurons (see the sketch after this list)
10) new multithreading/MPI concept: new feedforward runner, thread-local storage
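
The crux of item 9, sketched: the gradient of a shared weight is the sum of the contributions from every neuron using it, so the update must run once per Weight object rather than once per owner. Field names below are assumptions about the Core types:

struct Weight {
    float value;
    float gradient;   // accumulated across all neurons sharing this weight
    int usageCount;
};

// Backpropagation: every sharing neuron adds its local contribution.
void accumulate(Weight& w, float localGradient)
{
    w.gradient += localGradient;
}

// Update: applied once per Weight; running it once per neuron would
// scale the effective learning rate by usageCount.
void applyUpdate(Weight& w, float learningRate)
{
    w.value -= learningRate * w.gradient;
    w.gradient = 0.0f;
}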

Original comment by yashin.vladimir on 8 May 2009 at 8:15

GoogleCodeExporter commented 9 years ago
9) done
10) dismissed

Original comment by yashin.vladimir on 9 May 2009 at 8:06

GoogleCodeExporter commented 9 years ago
8) only an Image type dealing with the PPM/PGM reader. Nothing more. Let ImageMagick do the tricks :)
11) global Core code review. Fix memory leaks, fix encapsulation and information hiding, use smart pointers, fix serialization, speed/lock improvements and so on.

Original comment by yashin.vladimir on 20 May 2009 at 7:43

GoogleCodeExporter commented 9 years ago
12) 23.05.2009 meeting questions:
-cross-platform code: pros and cons
-refactoring (smart pointers, encapsulation, where shall we place the NetworkModel code?, include model)
-new task: convolutional network recognizer
-documentation, API stabilization
-IO: PPM/PGM reader, TrainData serialization
-difficulties (link duplication, shared weights (no automatic increment based on usage count), locks, huge memory leaks, neuron construction (manual/factory), wave algorithm limitations (recursive links, latency), attributes (what should be an attribute and what shouldn't), and, again, encapsulation and information hiding)

Original comment by yashin.vladimir on 22 May 2009 at 8:41

GoogleCodeExporter commented 9 years ago
Great meeting just finished. Conclusions:
1) replace the shared list<Link> with two distinct list<Link>s and remove "direction" from Link (see the sketch after this list)
2) use a callback while adding a Weight, so every take bumps the usage count:
Weight* Weight::get()
{
    usageCount++;
    return this;
}

3) no factories
4) add new Neuron types (Standard, RBF and so on). A Neuron "sucks in" data and passes it to its ActivationFunction (Float)
5) SE(ss()<<test<<5<<...): an exception built from streamed values, a brand new idea (see the sketch below)
6) read up on CTest for unit testing
7) oprofile, valgrind: hunt down the slowdowns and the like
8) get rid of NativeAttributes
9) for Serega: mark unclear places as FIXME
10) Doxygen
11) write the description in a separate doc (articles)
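
A sketch of how conclusions 1 and 2 could fit together; the names are illustrative, not the final Core API:

#include <list>

struct Neuron;

struct Weight {
    float value;
    int usageCount;
    Weight() : value(0.0f), usageCount(0) {}
    // Every taker goes through get(), so the count can't drift.
    Weight* get() { ++usageCount; return this; }
};

struct Link {
    Neuron* to;
    Weight* weight; // possibly shared between several Links
    Link(Neuron* n, Weight* w) : to(n), weight(w->get()) {}
};

struct Neuron {
    std::list<Link> inLinks;  // two distinct lists replace the
    std::list<Link> outLinks; // shared one; no "direction" flag needed
};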

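The SE idea in conclusion 5 is presumably an exception whose message is built inline from a stream; one possible shape, with the macro and helper names guessed from the snippet:

#include <sstream>
#include <stdexcept>
#include <string>

// ss() opens a temporary stream so a message can be composed inline.
struct ss {
    std::ostringstream os;
    template<class T> ss& operator<<(const T& v) { os << v; return *this; }
    operator std::string() const { return os.str(); }
};

#define SE(msg) throw std::runtime_error(std::string(msg))

// Usage: SE(ss() << "test value out of range: " << 5);
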
Original comment by yashin.vladimir on 23 May 2009 at 11:34

GoogleCodeExporter commented 9 years ago
Migrated to Wiki. Issue marked as Closed

Original comment by yashin.vladimir on 24 May 2009 at 11:56