hughperkins / tf-coriander

OpenCL 1.2 implementation for Tensorflow
Apache License 2.0

Explicit Device Specification doesn't work? #37

Closed ghost closed 7 years ago

ghost commented 7 years ago

So, just decided to pull https://github.com/hughperkins/TensorFlow-Examples and run a few of the examples, to see how things are going since the fix to #34 and the addition of working ADAM.

The examples that specify a device always crash. Here's an example for 3_NeuralNetworks/dynamic_rnn.py:

cathal@thinkum:~/TensorFlow-Examples/examples/3_NeuralNetworks$ python3 dynamic_rnn.py 
/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/ops/gradients.py:90: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
OpenCL platform: AMD Accelerated Parallel Processing
OpenCL device: Hawaii
I tensorflow/core/common_runtime/gpu/gpu_device.cc:989] Found device 0 with properties: 
name: Hawaii
major: -1 minor: -1 memoryClockRate (GHz) 1040
pciBusID 0000.0000
Total memory: 7.57GiB
Free memory: 3.95GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:877] cannot enable peer access from device ordinal 0 to device ordinal 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1011] DMA: 0 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1021] 0:   N 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1083] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Hawaii, pci bus id: 0000.0000)
cl_driver DeviceAllocate 3930062848
Traceback (most recent call last):
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 972, in _do_call
    return fn(*args)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 950, in _run_fn
    self._extend_graph()
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 999, in _extend_graph
    self._session, graph_def.SerializeToString(), status)
  File "/usr/lib/python3.5/contextlib.py", line 66, in __exit__
    next(self.gen)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/framework/errors.py", line 463, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors.InvalidArgumentError: Cannot assign a device to node 'split': Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.
Colocation Debug Info:
Colocation group had the following types and devices: 
Switch: GPU CPU 
Split: CPU 
     [[Node: split = Split[T=DT_FLOAT, num_split=20, _device="/device:GPU:0"](split/split_dim, Reshape)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "dynamic_rnn.py", line 170, in <module>
    sess.run(init)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 717, in run
    run_metadata_ptr)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 915, in _run
    feed_dict_string, options, run_metadata)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 965, in _do_run
    target_list, options, run_metadata)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 985, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.InvalidArgumentError: Cannot assign a device to node 'split': Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.
Colocation Debug Info:
Colocation group had the following types and devices: 
Switch: GPU CPU 
Split: CPU 
     [[Node: split = Split[T=DT_FLOAT, num_split=20, _device="/device:GPU:0"](split/split_dim, Reshape)]]

Caused by op 'split', defined at:
  File "dynamic_rnn.py", line 155, in <module>
    pred = dynamicRNN(x, seqlen, weights, biases)
  File "dynamic_rnn.py", line 123, in dynamicRNN
    x = tf.split(0, seq_max_len, x)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 1036, in split
    name=name)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 2621, in _split
    num_split=num_split, name=name)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 748, in apply_op
    op_def=op_def)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2388, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1300, in __init__
    self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): Cannot assign a device to node 'split': Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.
Colocation Debug Info:
Colocation group had the following types and devices: 
Switch: GPU CPU 
Split: CPU 
     [[Node: split = Split[T=DT_FLOAT, num_split=20, _device="/device:GPU:0"](split/split_dim, Reshape)]]

If I change the device specification to :1 instead of :0 (just to see what happens), I get this instead:

cathal@thinkum:~/TensorFlow-Examples/examples/3_NeuralNetworks$ python3 dynamic_rnn.py 
/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/ops/gradients.py:90: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
OpenCL platform: AMD Accelerated Parallel Processing
OpenCL device: Hawaii
I tensorflow/core/common_runtime/gpu/gpu_device.cc:989] Found device 0 with properties: 
name: Hawaii
major: -1 minor: -1 memoryClockRate (GHz) 1040
pciBusID 0000.0000
Total memory: 7.57GiB
Free memory: 3.95GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:877] cannot enable peer access from device ordinal 0 to device ordinal 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1011] DMA: 0 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1021] 0:   N 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1083] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Hawaii, pci bus id: 0000.0000)
cl_driver DeviceAllocate 3930062848
Traceback (most recent call last):
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 972, in _do_call
    return fn(*args)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 950, in _run_fn
    self._extend_graph()
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 999, in _extend_graph
    self._session, graph_def.SerializeToString(), status)
  File "/usr/lib/python3.5/contextlib.py", line 66, in __exit__
    next(self.gen)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/framework/errors.py", line 463, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors.InvalidArgumentError: Cannot assign a device to node 'GradientDescent/learning_rate': Could not satisfy explicit device specification '/device:GPU:1' because no devices matching that specification are registered in this process; available devices: /job:localhost/replica:0/task:0/cpu:0, /job:localhost/replica:0/task:0/gpu:0
     [[Node: GradientDescent/learning_rate = Const[dtype=DT_FLOAT, value=Tensor<type: float shape: [] values: 0.01>, _device="/device:GPU:1"]()]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "dynamic_rnn.py", line 170, in <module>
    sess.run(init)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 717, in run
    run_metadata_ptr)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 915, in _run
    feed_dict_string, options, run_metadata)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 965, in _do_run
    target_list, options, run_metadata)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 985, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.InvalidArgumentError: Cannot assign a device to node 'GradientDescent/learning_rate': Could not satisfy explicit device specification '/device:GPU:1' because no devices matching that specification are registered in this process; available devices: /job:localhost/replica:0/task:0/cpu:0, /job:localhost/replica:0/task:0/gpu:0
     [[Node: GradientDescent/learning_rate = Const[dtype=DT_FLOAT, value=Tensor<type: float shape: [] values: 0.01>, _device="/device:GPU:1"]()]]

Caused by op 'GradientDescent/learning_rate', defined at:
  File "dynamic_rnn.py", line 159, in <module>
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/training/optimizer.py", line 198, in minimize
    name=name)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/training/optimizer.py", line 314, in apply_gradients
    self._prepare()
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/training/gradient_descent.py", line 62, in _prepare
    name="learning_rate")
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 657, in convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", line 180, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", line 167, in constant
    attrs={"value": tensor_value, "dtype": dtype_value}, name=name).outputs[0]
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2388, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/cathal/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1300, in __init__
    self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): Cannot assign a device to node 'GradientDescent/learning_rate': Could not satisfy explicit device specification '/device:GPU:1' because no devices matching that specification are registered in this process; available devices: /job:localhost/replica:0/task:0/cpu:0, /job:localhost/replica:0/task:0/gpu:0
     [[Node: GradientDescent/learning_rate = Const[dtype=DT_FLOAT, value=Tensor<type: float shape: [] values: 0.01>, _device="/device:GPU:1"]()]]

My clinfo output:

cathal@thinkum:~/TensorFlow-Examples/examples/3_NeuralNetworks$ clinfo
Number of platforms                               1
  Platform Name                                   AMD Accelerated Parallel Processing
  Platform Vendor                                 Advanced Micro Devices, Inc.
  Platform Version                                OpenCL 2.0 AMD-APP (2348.3)
  Platform Profile                                FULL_PROFILE
  Platform Extensions                             cl_khr_icd cl_amd_event_callback cl_amd_offline_devices 
  Platform Extensions function suffix             AMD

  Platform Name                                   AMD Accelerated Parallel Processing
Number of devices                                 2
  Device Name                                     Hawaii
  Device Vendor                                   Advanced Micro Devices, Inc.
  Device Vendor ID                                0x1002
  Device Version                                  OpenCL 1.2 AMD-APP (2348.3)
  Driver Version                                  2348.3
  Device OpenCL C Version                         OpenCL C 1.2 
  Device Type                                     GPU
  Device Profile                                  FULL_PROFILE
  Device Board Name (AMD)                         AMD Radeon (TM) R9 390 Series
  Device Topology (AMD)                           PCI-E, 01:00.0
  Max compute units                               40
  SIMD per compute unit (AMD)                     4
  SIMD width (AMD)                                16
  SIMD instruction width (AMD)                    1
  Max clock frequency                             1040MHz
  Graphics IP (AMD)                               7.2
  Device Partition                                (core)
    Max number of sub-devices                     40
    Supported partition types                     none specified
  Max work item dimensions                        3
  Max work item sizes                             256x256x256
  Max work group size                             256
  Preferred work group size multiple              64
  Wavefront width (AMD)                           64
  Preferred / native vector sizes                 
    char                                                 4 / 4       
    short                                                2 / 2       
    int                                                  1 / 1       
    long                                                 1 / 1       
    half                                                 1 / 1        (n/a)
    float                                                1 / 1       
    double                                               1 / 1        (cl_khr_fp64)
  Half-precision Floating-point support           (n/a)
  Single-precision Floating-point support         (core)
    Denormals                                     No
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  Yes
  Double-precision Floating-point support         (cl_khr_fp64)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  No
  Address bits                                    64, Little-Endian
  Global memory size                              8131137536 (7.573GiB)
  Global free memory (AMD)                        7920852 (7.554GiB)
  Global memory channels (AMD)                    16
  Global memory banks per channel (AMD)           16
  Global memory bank width (AMD)                  256 bytes
  Error Correction support                        No
  Max memory allocation                           4244635648 (3.953GiB)
  Unified memory for Host and Device              No
  Minimum alignment for any data type             128 bytes
  Alignment of base address                       2048 bits (256 bytes)
  Global Memory cache type                        Read/Write
  Global Memory cache size                        16384
  Global Memory cache line                        64 bytes
  Image support                                   Yes
    Max number of samplers per kernel             16
    Max size for 1D images from buffer            134217728 pixels
    Max 1D or 2D image array size                 2048 images
    Base address alignment for 2D image buffers   256 bytes
    Pitch alignment for 2D image buffers          256 bytes
    Max 2D image size                             16384x16384 pixels
    Max 3D image size                             2048x2048x2048 pixels
    Max number of read image args                 128
    Max number of write image args                8
  Local memory type                               Local
  Local memory size                               32768 (32KiB)
  Local memory syze per CU (AMD)                  65536 (64KiB)
  Local memory banks (AMD)                        32
  Max constant buffer size                        4244635648 (3.953GiB)
  Max number of constant args                     8
  Max size of kernel argument                     1024
  Queue properties                                
    Out-of-order execution                        No
    Profiling                                     Yes
  Prefer user sync for interop                    Yes
  Profiling timer resolution                      1ns
  Profiling timer offset since Epoch (AMD)        1496133664638937392ns (Tue May 30 09:41:04 2017)
  Execution capabilities                          
    Run OpenCL kernels                            Yes
    Run native kernels                            No
    Thread trace supported (AMD)                  Yes
    SPIR versions                                 1.2
  printf() buffer size                            1048576 (1024KiB)
  Built-in kernels                                
  Device Available                                Yes
  Compiler Available                              Yes
  Linker Available                                Yes
  Device Extensions                               cl_khr_fp64 cl_amd_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_gl_sharing cl_amd_device_attribute_query cl_amd_vec3 cl_amd_printf cl_amd_media_ops cl_amd_media_ops2 cl_amd_popcnt cl_khr_image2d_from_buffer cl_khr_spir cl_khr_gl_event 

  Device Name                                     Intel(R) Core(TM) i5-6600K CPU @ 3.50GHz
  Device Vendor                                   GenuineIntel
  Device Vendor ID                                0x1002
  Device Version                                  OpenCL 1.2 AMD-APP (2348.3)
  Driver Version                                  2348.3 (sse2,avx)
  Device OpenCL C Version                         OpenCL C 1.2 
  Device Type                                     CPU
  Device Profile                                  FULL_PROFILE
  Device Board Name (AMD)                         
  Device Topology (AMD)                           (n/a)
  Max compute units                               4
  Max clock frequency                             799MHz
  Device Partition                                (core, cl_ext_device_fission)
    Max number of sub-devices                     4
    Supported partition types                     equally, by counts, by affinity domain
    Supported affinity domains                    L3 cache, L2 cache, L1 cache, next partitionable
    Supported partition types (ext)               equally, by counts, by affinity domain
    Supported affinity domains (ext)              L3 cache, L2 cache, L1 cache, next fissionable
  Max work item dimensions                        3
  Max work item sizes                             1024x1024x1024
  Max work group size                             1024
  Preferred work group size multiple              1
  Preferred / native vector sizes                 
    char                                                16 / 16      
    short                                                8 / 8       
    int                                                  4 / 4       
    long                                                 2 / 2       
    half                                                 4 / 4        (n/a)
    float                                                8 / 8       
    double                                               4 / 4        (cl_khr_fp64)
  Half-precision Floating-point support           (n/a)
  Single-precision Floating-point support         (core)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  Yes
  Double-precision Floating-point support         (cl_khr_fp64)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  No
  Address bits                                    64, Little-Endian
  Global memory size                              16788918272 (15.64GiB)
  Error Correction support                        No
  Max memory allocation                           4197229568 (3.909GiB)
  Unified memory for Host and Device              Yes
  Minimum alignment for any data type             128 bytes
  Alignment of base address                       1024 bits (128 bytes)
  Global Memory cache type                        Read/Write
  Global Memory cache size                        32768
  Global Memory cache line                        64 bytes
  Image support                                   Yes
    Max number of samplers per kernel             16
    Max size for 1D images from buffer            65536 pixels
    Max 1D or 2D image array size                 2048 images
    Max 2D image size                             8192x8192 pixels
    Max 3D image size                             2048x2048x2048 pixels
    Max number of read image args                 128
    Max number of write image args                64
  Local memory type                               Global
  Local memory size                               32768 (32KiB)
  Max constant buffer size                        65536 (64KiB)
  Max number of constant args                     8
  Max size of kernel argument                     4096 (4KiB)
  Queue properties                                
    Out-of-order execution                        No
    Profiling                                     Yes
  Prefer user sync for interop                    Yes
  Profiling timer resolution                      1ns
  Profiling timer offset since Epoch (AMD)        1496133664638937392ns (Tue May 30 09:41:04 2017)
  Execution capabilities                          
    Run OpenCL kernels                            Yes
    Run native kernels                            Yes
    SPIR versions                                 1.2
  printf() buffer size                            65536 (64KiB)
  Built-in kernels                                
  Device Available                                Yes
  Compiler Available                              Yes
  Linker Available                                Yes
  Device Extensions                               cl_khr_fp64 cl_amd_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_gl_sharing cl_ext_device_fission cl_amd_device_attribute_query cl_amd_vec3 cl_amd_printf cl_amd_media_ops cl_amd_media_ops2 cl_amd_popcnt cl_khr_spir cl_khr_gl_event 

NULL platform behavior
  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...)  No platform
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...)   No platform
  clCreateContext(NULL, ...) [default]            No platform
  clCreateContext(NULL, ...) [other]              Success [AMD]
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU)  No platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU)  No platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR)  No platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM)  No platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL)  No platform
ghost commented 7 years ago

Possibly related:

https://github.com/tensorflow/tensorflow/issues/2292

ghost commented 7 years ago

If I change session initialisation to with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:, then instead of the above error it just hangs quietly for a bit after cl_driver DeviceAllocate and then segfaults.
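(For anyone following along, the config change above as a complete snippet, against the TF 0.x-era API this wheel is based on. `allow_soft_placement` lets the placer fall back to CPU when an op has no GPU kernel; adding `log_device_placement` — an assumption on my part, not something tried in this thread — would also print where each op actually landed, which helps diagnose placements like this one.)

```python
import tensorflow as tf  # TF 0.x-era API, as shipped in this wheel

# Fall back to CPU for ops with no GPU kernel (like Split here),
# and log where every op actually ran:
config = tf.ConfigProto(allow_soft_placement=True,
                        log_device_placement=True)

with tf.Session(config=config) as sess:
    sess.run(tf.initialize_all_variables())  # 0.x initializer name
```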

hughperkins commented 7 years ago

split is missing currently. It's on my list of things to do https://github.com/hughperkins/tf-coriander/issues/33
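For reference, the semantics the example depends on can be sketched with NumPy — shapes here are hypothetical, loosely modelled on the example's 20-step sequence; this shows what the missing kernel computes, not how tf-coriander would implement it:

```python
import numpy as np

# tf.split(0, num_split, x) in the 0.x API cuts a tensor into
# num_split equal pieces along axis 0. dynamic_rnn.py uses it to turn
# a [seq_max_len, batch, features] tensor into per-timestep slices.
seq_max_len, batch, features = 20, 3, 4  # hypothetical sizes
x = np.arange(seq_max_len * batch * features, dtype=np.float32)
x = x.reshape(seq_max_len, batch, features)

# NumPy analogue of the missing op:
pieces = np.split(x, seq_max_len, axis=0)
print(len(pieces), pieces[0].shape)  # 20 pieces, each (1, 3, 4)
```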

hughperkins commented 7 years ago

thoughts on relative priority of tf.split vs tf.random_normal?

ghost commented 7 years ago

You'd know significantly more than I on their relative importance, I'm just getting started at TF. :)

ghost commented 7 years ago

Probably RNG is more important, because it's needed to get networks running in the first place. My (admittedly poor) understanding is that Split is used for multi-device learning? That's great for those with multi-GPU setups, but probably irrelevant to most users.

I don't know if this is related to the question about Random being needed to seed networks, but the 3_NeuralNetworks/multilayer_perceptron.py example ends for me with Accuracy: 0.1135, which seems suspiciously bad for an example network. Could that be a quiet bug all on its own? If so, I can open another issue.

ghost commented 7 years ago

Never mind, I just grepped the files and I see they're all explicitly seeded with tf.random_normal, so if that isn't working at all then it's clearly a priority to get it working again.

hughperkins commented 7 years ago

You can seed them using np.random.randn(3,4).astype(np.float32)

hughperkins commented 7 years ago

(obviously, update the shape as needed)

hughperkins commented 7 years ago

(like this: https://github.com/hughperkins/TensorFlow-Examples/blob/c7ba3f6f04d21675951509235c59953429e68656/examples/3_NeuralNetworks/multilayer_perceptron.py#L64 )
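Putting those three comments together, a minimal sketch of the workaround — layer sizes here are hypothetical (the linked multilayer_perceptron.py uses its own):

```python
import numpy as np

# Hypothetical layer shape; adjust to your network. Instead of
# tf.random_normal (not yet supported in tf-coriander), generate the
# initial weights on the host with NumPy:
n_input, n_hidden = 784, 256
w_init = np.random.randn(n_input, n_hidden).astype(np.float32)

# weights = tf.Variable(w_init)  # TF accepts the ndarray directly
print(w_init.shape, w_init.dtype)
```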

hughperkins commented 7 years ago

I'm going to handle random normal first, since:

  • I already started on it
  • prefer consistency across platforms
  • need to fix random normal soonish anyway, and
  • Adam optimizer will give junky results without it

In the meantime, I'm going to close this issue, 37, as a duplicate of 33 https://github.com/hughperkins/tf-coriander/issues/33 , ok?

ghost commented 7 years ago

Thanks! Sorry, my inexperience led me to miss that this and Split were the same issue.

On 30 May 2017 12:50:32 GMT+01:00, Hugh Perkins notifications@github.com wrote:

I'm going to handle random normal first, since:

  • I already started on it
  • prefer consistency across platforms
  • need to fix random normal soonish anyway, and
  • Adam optimizer will give junky results without it

In the meantime, I'm going to close this issue, 37, as a duplicate of 33 https://github.com/hughperkins/tf-coriander/issues/33 , ok?



hughperkins commented 7 years ago

No worries. Thank you very much for starting to investigate how much of tf-coriander is working for you, and what bits are missing. This is very helpful :-)