JackKelly / neuralnilm_prototype
MIT License · 51 stars · 22 forks
Issues (newest first)
#48 · give the net two inputs: raw power and fdiff · JackKelly opened 9 years ago · 0 comments
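Issue #48 proposes giving the network two input channels: the raw aggregate power signal and its forward difference (fdiff). A minimal sketch of how such a pair of channels could be assembled, assuming a 1-D power series; the function name and the NumPy stacking layout are illustrative assumptions, not the repository's actual code:

```python
import numpy as np

def make_two_channel_input(power):
    """Stack raw power with its forward difference (fdiff) as two channels.

    `power` is a 1-D array of aggregate power readings. The fdiff channel is
    padded with a leading zero so both channels have the same length.
    Hypothetical helper: illustrates the idea in issue #48 only.
    """
    power = np.asarray(power, dtype=float)
    # Forward difference; prepend the first value so diff[0] == 0.
    fdiff = np.diff(power, prepend=power[0])
    # Shape (timesteps, 2): channel 0 = raw power, channel 1 = fdiff.
    return np.stack([power, fdiff], axis=-1)
```

A network fed this array sees both the absolute power level and the step changes that mark appliance on/off events.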
#47 · Truncated back prop · JackKelly closed 9 years ago · 1 comment
#46 · Gradient clipping · JackKelly opened 9 years ago · 0 comments
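Gradient clipping (issue #46) is a standard remedy for exploding gradients in recurrent nets, and is also relevant to the NaN problems tracked in #42. A minimal sketch of global-norm clipping over plain NumPy arrays; the function name is an assumption, and a real training loop would apply this to the framework's symbolic gradients instead:

```python
import numpy as np

def clip_gradients(grads, max_norm):
    """Rescale a list of gradient arrays so their global L2 norm is <= max_norm.

    If the combined norm already fits, the gradients are returned unchanged.
    Hypothetical helper illustrating the technique in issue #46.
    """
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads
```

Clipping by the global norm (rather than element-wise) preserves the direction of the update while bounding its magnitude.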
#45 · When target is single appliance, validation data must include that appliance! · JackKelly closed 9 years ago · 1 comment
#44 · Automate running multiple models · JackKelly opened 9 years ago · 0 comments
#43 · Deep Latent Gaussian Models (DLGMs) · JackKelly opened 9 years ago · 0 comments
#42 · Fix NaNs during training on 5-appliances · JackKelly opened 9 years ago · 0 comments
#41 · Should I take the average of the training costs for plotting? · JackKelly closed 9 years ago · 1 comment
#40 · Validation data should use different activations · JackKelly opened 9 years ago · 0 comments
#39 · Automatic architecture search · JackKelly opened 9 years ago · 0 comments
#38 · Try quantising input data OR using linspace to init weights · JackKelly opened 9 years ago · 0 comments
#37 · More realistic data · JackKelly opened 9 years ago · 1 comment
#36 · Pipeline for processing inputs and outputs · JackKelly opened 9 years ago · 0 comments
#35 · Clockwork RNNs · JackKelly opened 9 years ago · 0 comments
#34 · Build awesome disag algo into a web service · JackKelly opened 9 years ago · 0 comments
#33 · train net on many appliances and then extract the ‘knowledge’ for a single appliance? · JackKelly opened 9 years ago · 0 comments
#32 · Dealing with missing samples and different sample rates · JackKelly opened 9 years ago · 0 comments
#31 · force the outputs to sum to no more than the aggregate power demand at each time slice · JackKelly opened 9 years ago · 0 comments
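Issue #31 asks for a hard physical constraint: at each time slice, the per-appliance estimates should sum to no more than the aggregate. One simple post-processing realisation is to rescale the estimates at any timestep where they overshoot; the function name and array layout below are illustrative assumptions:

```python
import numpy as np

def cap_to_aggregate(estimates, aggregate):
    """Scale per-appliance estimates down where their sum exceeds the aggregate.

    estimates: array of shape (timesteps, n_appliances)
    aggregate: array of shape (timesteps,) with the measured mains power
    Timesteps whose estimates already fit are left untouched.
    Hypothetical helper illustrating the constraint in issue #31.
    """
    totals = estimates.sum(axis=1)
    # Shrink only the offending timesteps; guard against division by zero.
    scale = np.where(totals > aggregate,
                     aggregate / np.maximum(totals, 1e-12),
                     1.0)
    return estimates * scale[:, None]
```

Proportional rescaling keeps the relative split between appliances while enforcing the sum constraint; enforcing it inside the cost function instead is the subject of issue #9.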
#30 · MultiModel disaggregation · JackKelly opened 9 years ago · 0 comments
#29 · Ensemble · JackKelly opened 9 years ago · 0 comments
#28 · Mixture density network · JackKelly opened 9 years ago · 0 comments
#27 · Use two networks: one outputs boolean for if the appliance is on or off · JackKelly opened 9 years ago · 0 comments
#26 · Pre train on unlabelled data (e.g from xively) · JackKelly opened 9 years ago · 0 comments
#25 · one network per appliance (multiple experts) vs one network for all classes of appliances · JackKelly opened 9 years ago · 0 comments
#24 · 'Skip' connections · JackKelly opened 9 years ago · 0 comments
#23 · Output both power demand and classification · JackKelly opened 9 years ago · 0 comments
#22 · Try getting net to map just from fdiff to quantized fdiff · JackKelly opened 9 years ago · 0 comments
#21 · Comparisons for paper · JackKelly opened 9 years ago · 0 comments
#20 · curriculum training · JackKelly opened 9 years ago · 0 comments
#19 · Inputs and representation · JackKelly opened 9 years ago · 0 comments
#18 · Save and load learnt params to disk · JackKelly opened 9 years ago · 0 comments
#17 · More FF layers at the top of the network · JackKelly opened 9 years ago · 0 comments
#16 · NILM formulated as a sequence-to-sequence problem where the output is Fridge=[12:30-12:50, etc] · JackKelly opened 9 years ago · 0 comments
#15 · Visualise what each neuron is interested in · JackKelly opened 9 years ago · 0 comments
#14 · Regularization · JackKelly opened 9 years ago · 0 comments
#13 · Target representation · JackKelly opened 9 years ago · 0 comments
#12 · NILM Metrics · JackKelly opened 9 years ago · 0 comments
#11 · move Source and RealApplianceSource classes into NILMTK · JackKelly opened 9 years ago · 0 comments
#10 · logging · JackKelly opened 9 years ago · 0 comments
#9 · modify cost to severely penalise during training if the sum of the estimates is greater than the input · JackKelly opened 9 years ago · 0 comments
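Issue #9 is the soft-constraint counterpart of #31: rather than clipping outputs after the fact, add a heavily weighted penalty term to the training cost whenever the summed estimates exceed the aggregate input. A minimal NumPy sketch of such a penalised cost; the function name and the default penalty weight are assumptions, and the real project would express this symbolically for the training framework:

```python
import numpy as np

def penalised_mse(estimates, targets, aggregate, penalty_weight=10.0):
    """Mean squared error plus a heavy penalty when estimates overshoot the input.

    estimates, targets: arrays of shape (timesteps, n_appliances)
    aggregate: array of shape (timesteps,) with the measured mains power
    The penalty is the mean squared excess of sum(estimates) over the
    aggregate, scaled by `penalty_weight`. Hypothetical helper for issue #9.
    """
    mse = np.mean((estimates - targets) ** 2)
    # Only the amount by which the summed estimates exceed the input is penalised.
    excess = np.maximum(estimates.sum(axis=1) - aggregate, 0.0)
    return mse + penalty_weight * np.mean(excess ** 2)
```

Because the penalty is zero whenever the estimates fit under the aggregate, it leaves well-behaved predictions untouched and only punishes physically impossible ones.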
#8 · single-directional HSLSTM · JackKelly opened 9 years ago · 0 comments
#7 · Train from multiple datasets at different sample rates · JackKelly opened 9 years ago · 0 comments
#6 · Batch normalisation · JackKelly opened 9 years ago · 0 comments
#5 · Whole chapter could be on unsupervised training of auto encoders · JackKelly opened 9 years ago · 0 comments
#4 · Layer wise pre-training · JackKelly opened 9 years ago · 0 comments
#3 · 1D convnet input · JackKelly opened 9 years ago · 0 comments
#2 · Centre input array when using pad_input · JackKelly closed 9 years ago · 0 comments
#1 · Check Source inits array to zeros in gen_data · JackKelly closed 9 years ago · 0 comments