threefoldtecharchive / jumpscale9_lib

Apache License 2.0

capacity tracking: error in update reality #72

Closed zaibon closed 5 years ago

zaibon commented 6 years ago
Out[15]: Traceback (most recent call last):
  File "/opt/code/github/zero-os/0-robot/zerorobot/task/task.py", line 79, in execute
    self._result = self._func()
  File "/opt/code/github/zero-os/0-robot/zerorobot/template/decorator.py", line 33, in f_retry
    return f(*args, **kwargs)
  File "/opt/code/github/zero-os/0-templates/templates/node/node.py", line 87, in _register
    self.node_sal.capacity.update_reality()
  File "/opt/code/github/jumpscale/lib9/JumpScale9Lib/clients/zero_os/sal/Capacity/Capacity.py", line 105, in update_reality
    report = self.reality_report()
  File "/opt/code/github/jumpscale/lib9/JumpScale9Lib/clients/zero_os/sal/Capacity/Capacity.py", line 61, in reality_report
    used_memory=self._node.client.info.mem()['used']
  File "/opt/code/github/jumpscale/lib9/JumpScale9Lib/tools/capacity/reality_parser.py", line 22, in get_report
    storage = _parse_storage(disks, storage_pools)
  File "/opt/code/github/jumpscale/lib9/JumpScale9Lib/tools/capacity/reality_parser.py", line 81, in _parse_storage
    size = sp.fsinfo['data']['used']
  File "/opt/code/github/jumpscale/lib9/JumpScale9Lib/clients/zero_os/sal/StoragePool.py", line 174, in fsinfo
    raise ValueError("can't get fsinfo if storagepool is not mounted")
ValueError: can't get fsinfo if storagepool is not mounted
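The failure mode above is that `StoragePool.fsinfo` raises as soon as any listed pool is unmounted, which aborts the whole reality report. A minimal, self-contained sketch of the pattern and of one defensive alternative that skips unmounted pools (the `StoragePool` class here is a stand-in for illustration, not the real JumpscaleLib SAL class):

```python
# Stand-in class illustrating the failure: the real JumpscaleLib
# StoragePool raises ValueError from fsinfo when the pool is unmounted.
class StoragePool:
    def __init__(self, name, mountpoint=None, fsinfo=None):
        self.name = name
        self.mountpoint = mountpoint   # None when the pool is not mounted
        self._fsinfo = fsinfo

    @property
    def mounted(self):
        return self.mountpoint is not None

    @property
    def fsinfo(self):
        if not self.mounted:
            raise ValueError("can't get fsinfo if storagepool is not mounted")
        return self._fsinfo


def parse_storage_used(storage_pools):
    """Sum used bytes over mounted pools only, instead of raising on the
    first unmounted pool the way _parse_storage does in the trace above."""
    total = 0
    for sp in storage_pools:
        if not sp.mounted:
            continue                   # defensive skip; the original raised here
        total += sp.fsinfo['data']['used']
    return total


pools = [
    StoragePool('zos-cache', '/mnt/storagepools/sp_zos-cache',
                {'data': {'used': 237900000}}),           # illustrative value
    StoragePool('14a14d14-7f72-4874-87f1-2e1f7fefb016'),  # listed but unmounted
]
print(parse_storage_used(pools))  # → 237900000
```

Whether to skip unmounted pools or fail loudly is a design choice; the sketch only shows how the crash can be avoided at the parser level.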
maxux commented 5 years ago

I cannot reproduce :/

Pishoy commented 5 years ago

I got the same error when I updated a node with an image that has a different farm name than the old one.

In [8]: ncl = j.clients.zos.get('node', data = { 'host':'10.102.44.252', 'password_':jwt},)

In [9]: ncl.ping()
Out[9]: 'PONG Version: master @Revision: 584ba6fe3b2891eb62c233f0c606315c97595ff9'

In [10]: ncl
Out[10]: Node <10.102.44.252:6379>

In [12]: zrobot = ncl.containers.get('zrobot')

In [14]: zrobottoken = zrobot.client.bash('zrobot godtoken get').get()

In [15]: string = zrobottoken.stdout.split(":")[1]

In [16]: token = string.strip()

In [21]: nodeip = ncl.public_addr

In [22]: url = 'http://{}:6600'.format(nodeip)

In [25]: j.clients.zrobot.new(ncl.name, data={'url': url, 'god_token_': token})

In [26]: node_robot = j.clients.zrobot.robots.get(ncl.name)
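The token extraction in In [14]-[16] above can be condensed into one helper. A small sketch; the exact stdout format of `zrobot godtoken get` is not shown in the transcript, so the `god token: <token>` shape below is an assumption inferred from the split-on-`:` done interactively:

```python
def extract_god_token(stdout):
    """Mimic the interactive split/strip above: take the text after the
    first ':' and trim whitespace. Assumes the token itself has no ':'."""
    return stdout.split(":")[1].strip()


# Hypothetical command output; the real output is not shown in the issue.
sample = "god token: eyJhbGciOi.example.token\n"
print(extract_god_token(sample))  # → eyJhbGciOi.example.token
```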

In [32]: service = node_robot.services.get(name='_node_capacity')
In [38]: service.task_list.list_tasks(True)[0]
Out[38]: _reality
In [39]: t = service.task_list.list_tasks(True)[0]
In [40]: t.state
Out[40]: 'error'

In [41]: print(t.eco.trace)
Traceback (most recent call last):
  File "/opt/code/github/threefoldtech/0-robot/zerorobot/task/task.py", line 81, in execute
    self._result = self._execute_greenlet.get(block=True, timeout=None)
  File "src/gevent/greenlet.py", line 709, in gevent._greenlet.Greenlet.get
  File "src/gevent/greenlet.py", line 317, in gevent._greenlet.Greenlet._raise_exception
  File "/usr/local/lib/python3.5/dist-packages/gevent/_compat.py", line 47, in reraise
    raise value.with_traceback(tb)
  File "src/gevent/greenlet.py", line 766, in gevent._greenlet.Greenlet.run
  File "/opt/code/github/threefoldtech/0-robot/zerorobot/template/decorator.py", line 64, in f_timeout
    return gl.get(block=True, timeout=seconds)
  File "src/gevent/greenlet.py", line 709, in gevent._greenlet.Greenlet.get
  File "src/gevent/greenlet.py", line 317, in gevent._greenlet.Greenlet._raise_exception
  File "/usr/local/lib/python3.5/dist-packages/gevent/_compat.py", line 47, in reraise
    raise value.with_traceback(tb)
  File "src/gevent/greenlet.py", line 766, in gevent._greenlet.Greenlet.run
  File "/opt/code/github/threefoldtech/0-templates/templates/node_capacity/node_capacity.py", line 71, in _reality
    self._node_sal.capacity.update_reality()
  File "/opt/code/github/threefoldtech/jumpscale_lib/JumpscaleLib/sal_zos/capacity/Capacity.py", line 102, in update_reality
    report = self.reality_report()
  File "/opt/code/github/threefoldtech/jumpscale_lib/JumpscaleLib/sal_zos/capacity/Capacity.py", line 37, in reality_report
    used_memory=self._node.client.info.mem()['used']
  File "/opt/code/github/threefoldtech/jumpscale_lib/JumpscaleLib/tools/capacity/reality_parser.py", line 23, in get_report
    storage = _parse_storage(disks, storage_pools)
  File "/opt/code/github/threefoldtech/jumpscale_lib/JumpscaleLib/tools/capacity/reality_parser.py", line 79, in _parse_storage
    size = sp.fsinfo['data']['used']
  File "/opt/code/github/threefoldtech/jumpscale_lib/JumpscaleLib/sal_zos/storage/StoragePool.py", line 159, in fsinfo
    raise ValueError("can't get fsinfo if storagepool is not mounted")
ValueError: can't get fsinfo if storagepool is not mounted
In [42]: ncl.storagepools.list()
Out[42]: 
[StoragePool <zos-cache>,
 StoragePool <14a14d14-7f72-4874-87f1-2e1f7fefb016>,
 StoragePool <8358294c-7ec3-4164-b6b9-8a6b5432fd5b>,
 StoragePool <78012885-92ad-42ae-8724-c1891fa3aa86>,
 StoragePool <b399c60f-af00-446f-87a0-946016554d0f>,
 StoragePool <8016f882-69be-4a0a-a745-3ff1ce027c0d>,
 StoragePool <f33a0045-5a17-455d-b5f6-3d61b5f6ff30>,
 StoragePool <83f91791-ee0e-4951-8c89-6b7481d950c1>,
 StoragePool <631b06a8-4f02-4da6-b73a-28ccc0492ef1>,
 StoragePool <450e0c4b-a23b-4dd2-b75d-0a3b2fb62757>,
 StoragePool <38cf146a-e00d-4d4e-9af6-e3dd0ea47acd>,
 StoragePool <8db122ba-96e0-4aec-89d6-bc76d2c55ae0>,
 StoragePool <8766d129-9d7c-40d3-8204-1d8ceb5070eb>,
 StoragePool <097ea3e8-a8c2-4f73-9a50-c9072e70d9b6>,
 StoragePool <0d7da281-60c0-41b0-9c50-a12219e8b8c5>,
 StoragePool <a1cd37fe-535e-4124-a526-e7dddd8f423b>,
 StoragePool <7bcca7e0-b0ec-42c3-ae7e-be28815f7ac6>,
 StoragePool <f5ab0a50-1e52-4b68-80a9-fa301137fa38>,
 StoragePool <62038d99-1c3d-4e38-8a44-def8493b3f64>,
 StoragePool <1f65f74b-fae0-4558-afc0-7608bbdfbd56>,
 StoragePool <1b2d39c0-d050-4832-9fb4-bd8758c8366a>,
 StoragePool <77309207-fb12-4079-a938-1be152d7c171>,
 StoragePool <b5c051d8-d60c-4109-b0f4-a3230c2f317e>,
 StoragePool <e8bdf9ac-f951-4b11-83d2-2100c653aeb8>,
 StoragePool <2a92991e-3540-422c-920d-7cd26396fdb9>,
 StoragePool <a89bd1e3-5faa-4fd4-92a7-2f8c1f63c230>,
 StoragePool <fbb8bf6e-b2de-4165-8da8-1ac1563e47b1>,
 StoragePool <006dd713-1eed-44bc-a026-5debd3455cb2>,
 StoragePool <179db102-1031-465f-ba32-6bc81829b385>,
 StoragePool <d62c7552-4b49-4731-9b5a-04135dd06a0a>,
 StoragePool <01b4d837-d934-4fa7-b474-6a95e285e85e>,
 StoragePool <f3e3c021-6aa7-41cf-81ef-b252357f5983>,
 StoragePool <4cd7335f-eb91-48e2-975b-68a4e1c8b331>,
 StoragePool <134e005b-d81a-4976-ad68-cb728e24f9fa>,
 StoragePool <480a18be-0cc4-4adb-950b-0e35cf5a8533>,
 StoragePool <fce14f15-0ec9-4fb0-b1db-955dcfcb5d21>,
 StoragePool <22b284fb-ba91-4e11-b491-f524c7da67dc>,
 StoragePool <32af60b8-ae66-4a1f-9833-c012f66dd768>,
 StoragePool <bf5012a2-526d-4f07-a18d-835a8f7d25b0>,
 StoragePool <8d804051-74aa-442e-91b3-ed40dc247d58>,
 StoragePool <ee6331ea-73c9-4421-a037-b69b6d455f73>,
 StoragePool <ae0aa879-4abb-41be-a1ad-e02c4ac3f09d>,
 StoragePool <ced5cc6c-0da0-463e-a168-5669d729d270>,
 StoragePool <e5f540b9-847f-4cc5-9951-a7ecf4a697f3>,
 StoragePool <6a68923a-667d-4831-ae8e-0617e5d84b21>,
 StoragePool <8e08c6ca-4599-4384-ad4b-cf966242635a>,
 StoragePool <b193128f-c4e0-419f-b10a-6a1920abbe45>,
 StoragePool <d74b394d-eefa-4834-8f94-357a2b619452>,
 StoragePool <4126e27b-107c-455a-95d0-d23331cee5ee>,
 StoragePool <f4b028c0-d299-45e7-8a75-1ed49630c588>,
 StoragePool <9d596cc5-02f3-434c-a2d7-68be8289008e>,
 StoragePool <03d23d5a-689c-458a-8a79-18f4ed8d71dc>,
 StoragePool <be9976ca-9f44-4b25-b142-95b2b42eff47>,
 StoragePool <640ceb2b-aaac-4ba4-8794-ef8422bc116f>,
 StoragePool <7a52168e-dff9-42b0-8cd0-d59f978f301e>,
 StoragePool <d640e31b-39c1-4c7f-8876-a98f2a8bcda1>,
 StoragePool <55f0def7-e4a7-415b-b876-404219ab6ebc>,
 StoragePool <1f8b4686-110b-41c4-a77f-8bf634585d9d>,
 StoragePool <72b2a336-a759-43d6-b43c-5e535fc8e4ad>,
 StoragePool <62adb0fc-94c9-408d-9ca7-301c8874f450>]
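Only one of the roughly sixty pools listed above actually appears in the `df -kh` output that follows. A quick, self-contained way to cross-check which pools are mounted (the df text below is a shortened copy of the real output; the `/mnt/storagepools/sp_<name>` mountpoint convention is taken from it):

```python
# Abbreviated copy of the df output shown in this issue.
DF_OUTPUT = """\
Filesystem                Size      Used Available Use% Mounted on
tmpfs                   512.0M    325.6M    186.4M  64% /
/dev/sdaa1                1.7T    237.9M      1.7T   0% /mnt/storagepools/sp_zos-cache
/dev/sdaa1                1.7T    237.9M      1.7T   0% /var/cache
"""


def mounted_storagepools(df_text, prefix='/mnt/storagepools/sp_'):
    """Return the pool names whose mountpoint appears in df output."""
    names = set()
    for line in df_text.splitlines()[1:]:     # skip the header row
        mountpoint = line.split()[-1]
        if mountpoint.startswith(prefix):
            names.add(mountpoint[len(prefix):])
    return names


print(mounted_storagepools(DF_OUTPUT))  # → {'zos-cache'}
```

That mismatch (dozens of pools known to the SAL, one actually mounted) is consistent with the `fsinfo` ValueError above.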

In [43]: ncl
Out[43]: Node <10.102.44.252:6379>

In [44]: ncl.client.bash('df -kh').get()
Out[44]: 
STATE: 0 SUCCESS
STDOUT:
Filesystem                Size      Used Available Use% Mounted on
tmpfs                   512.0M    325.6M    186.4M  64% /
devtmpfs                 15.5G         0     15.5G   0% /dev
cgroup_root              15.5G         0     15.5G   0% /sys/fs/cgroup
/dev/sdaa1                1.7T    237.9M      1.7T   0% /mnt/storagepools/sp_zos-cache
/dev/sdaa1                1.7T    237.9M      1.7T   0% /var/cache
/dev/sdaa1                1.7T    237.9M      1.7T   0% /var/log
overlay                   1.7T    237.9M      1.7T   0% /mnt/containers/1
/dev/sdaa1                1.7T    237.9M      1.7T   0% /mnt/containers/1/opt/code/local/stdorg/config
/dev/sdaa1                1.7T    237.9M      1.7T   0% /mnt/containers/1/opt/var/data/zrobot/zrobot_data
/dev/sdaa1                1.7T    237.9M      1.7T   0% /mnt/containers/1/root/jumpscale/cfg
/dev/sdaa1                1.7T    237.9M      1.7T   0% /mnt/containers/1/root/.ssh
tmpfs                   512.0M    325.6M    186.4M  64% /mnt/containers/1/tmp/redis.sock
tmpfs                   512.0M    325.6M    186.4M  64% /mnt/containers/1/coreX

STDERR:

DATA:

In [45]: ncl.client.bash('lsblk').get()
Out[45]: 
STATE: 0 SUCCESS
STDOUT:
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda       8:0    0 10.9T  0 disk 
`-sda1    8:1    0 10.9T  0 part 
sdb       8:16   0 10.9T  0 disk 
`-sdb1    8:17   0 10.9T  0 part 
sdc       8:32   0 10.9T  0 disk 
`-sdc1    8:33   0 10.9T  0 part 
sdd       8:48   0 10.9T  0 disk 
`-sdd1    8:49   0 10.9T  0 part 
sde       8:64   0 10.9T  0 disk 
`-sde1    8:65   0 10.9T  0 part 
sdf       8:80   0 10.9T  0 disk 
`-sdf1    8:81   0 10.9T  0 part 
sdg       8:96   0 10.9T  0 disk 
`-sdg1    8:97   0 10.9T  0 part 
sdh       8:112  0 10.9T  0 disk 
`-sdh1    8:113  0 10.9T  0 part 
sdi       8:128  0 10.9T  0 disk 
`-sdi1    8:129  0 10.9T  0 part 
sdj       8:144  0 10.9T  0 disk 
`-sdj1    8:145  0 10.9T  0 part 
sdk       8:160  0 10.9T  0 disk 
`-sdk1    8:161  0 10.9T  0 part 
sdl       8:176  0 10.9T  0 disk 
`-sdl1    8:177  0 10.9T  0 part 
sdm       8:192  0 10.9T  0 disk 
`-sdm1    8:193  0 10.9T  0 part 
sdn       8:208  0 10.9T  0 disk 
`-sdn1    8:209  0 10.9T  0 part 
sdo       8:224  0 10.9T  0 disk 
`-sdo1    8:225  0 10.9T  0 part 
sdp       8:240  0 10.9T  0 disk 
`-sdp1    8:241  0 10.9T  0 part 
sdq      65:0    0 10.9T  0 disk 
`-sdq1   65:1    0 10.9T  0 part 
sdr      65:16   0 10.9T  0 disk 
`-sdr1   65:17   0 10.9T  0 part 
sds      65:32   0 10.9T  0 disk 
`-sds1   65:33   0 10.9T  0 part 
sdt      65:48   0 10.9T  0 disk 
`-sdt1   65:49   0 10.9T  0 part 
sdu      65:64   0 10.9T  0 disk 
`-sdu1   65:65   0 10.9T  0 part 
sdv      65:80   0 10.9T  0 disk 
`-sdv1   65:81   0 10.9T  0 part 
sdw      65:96   0 10.9T  0 disk 
`-sdw1   65:97   0 10.9T  0 part 
sdx      65:112  0 10.9T  0 disk 
`-sdx1   65:113  0 10.9T  0 part 
sdy      65:128  0 10.9T  0 disk 
`-sdy1   65:129  0 10.9T  0 part 
sdz      65:144  0 10.9T  0 disk 
`-sdz1   65:145  0 10.9T  0 part 
sdaa     65:160  0  1.8T  0 disk 
`-sdaa1  65:161  0  1.8T  0 part /mnt/storagepools/sp_zos-cache
sdab     65:176  0  1.8T  0 disk 
`-sdab1  65:177  0  1.8T  0 part 
sdac     65:192  0  1.8T  0 disk 
`-sdac1  65:193  0  1.8T  0 part 
sdad     65:208  0  1.8T  0 disk 
`-sdad1  65:209  0  1.8T  0 part 
sdae     65:224  0 10.9T  0 disk 
`-sdae1  65:225  0 10.9T  0 part 
sdaf     65:240  0 10.9T  0 disk 
`-sdaf1  65:241  0 10.9T  0 part 
sdag     66:0    0 10.9T  0 disk 
`-sdag1  66:1    0 10.9T  0 part 
sdah     66:16   0 10.9T  0 disk 
`-sdah1  66:17   0 10.9T  0 part 
sdai     66:32   0 10.9T  0 disk 
`-sdai1  66:33   0 10.9T  0 part 
sdaj     66:48   0 10.9T  0 disk 
`-sdaj1  66:49   0 10.9T  0 part 
sdak     66:64   0 10.9T  0 disk 
`-sdak1  66:65   0 10.9T  0 part 
sdal     66:80   0 10.9T  0 disk 
`-sdal1  66:81   0 10.9T  0 part 
sdam     66:96   0 10.9T  0 disk 
`-sdam1  66:97   0 10.9T  0 part 
sdan     66:112  0 10.9T  0 disk 
`-sdan1  66:113  0 10.9T  0 part 
sdao     66:128  0 10.9T  0 disk 
`-sdao1  66:129  0 10.9T  0 part 
sdap     66:144  0 10.9T  0 disk 
`-sdap1  66:145  0 10.9T  0 part 
sdaq     66:160  0 10.9T  0 disk 
`-sdaq1  66:161  0 10.9T  0 part 
sdar     66:176  0 10.9T  0 disk 
`-sdar1  66:177  0 10.9T  0 part 
sdas     66:192  0 10.9T  0 disk 
`-sdas1  66:193  0 10.9T  0 part 
sdat     66:208  0 10.9T  0 disk 
`-sdat1  66:209  0 10.9T  0 part 
sdau     66:224  0 10.9T  0 disk 
`-sdau1  66:225  0 10.9T  0 part 
sdav     66:240  0 10.9T  0 disk 
`-sdav1  66:241  0 10.9T  0 part 
sdaw     67:0    0 10.9T  0 disk 
`-sdaw1  67:1    0 10.9T  0 part 
sdax     67:16   0 10.9T  0 disk 
`-sdax1  67:17   0 10.9T  0 part 
sday     67:32   0 10.9T  0 disk 
`-sday1  67:33   0 10.9T  0 part 
sdaz     67:48   0 10.9T  0 disk 
`-sdaz1  67:49   0 10.9T  0 part 
sdba     67:64   0 10.9T  0 disk 
`-sdba1  67:65   0 10.9T  0 part 
sdbb     67:80   0 10.9T  0 disk 
`-sdbb1  67:81   0 10.9T  0 part 
sdbc     67:96   0 10.9T  0 disk 
`-sdbc1  67:97   0 10.9T  0 part 
sdbd     67:112  0 10.9T  0 disk 
`-sdbd1  67:113  0 10.9T  0 part 
sdbe     67:128  0 10.9T  0 disk 
`-sdbe1  67:129  0 10.9T  0 part 
sdbf     67:144  0 10.9T  0 disk 
`-sdbf1  67:145  0 10.9T  0 part 
sdbg     67:160  0 10.9T  0 disk 
`-sdbg1  67:161  0 10.9T  0 part 
sdbh     67:176  0 10.9T  0 disk 
`-sdbh1  67:177  0 10.9T  0 part 

STDERR:

DATA:

js version is development_960

In [10]: ncl = j.clients.zos.get('dog', data = { 'host':'172.29.184.65', 'password_':jwt},)

In [11]: ncl.client.bash('lsblk').get()
Out[11]:
STATE: 0 SUCCESS
STDOUT:

STDERR:

DATA:

In [12]: ncl.ping()
Out[12]: 'PONG Version: master @Revision: 584ba6fe3b2891eb62c233f0c606315c97595ff9'

In [13]: ncl.client.bash('df -kh').get()
Out[13]:
STATE: 0 SUCCESS
STDOUT:
Filesystem                Size      Used Available Use% Mounted on
tmpfs                   512.0M    434.9M     77.1M  85% /
devtmpfs                 94.4G         0     94.4G   0% /dev
cgroup_root              94.4G         0     94.4G   0% /sys/fs/cgroup
overlay                 512.0M    434.9M     77.1M  85% /mnt/containers/1
tmpfs                   512.0M    434.9M     77.1M  85% /mnt/containers/1/root/jumpscale/cfg
tmpfs                   512.0M    434.9M     77.1M  85% /mnt/containers/1/root/.ssh
tmpfs                   512.0M    434.9M     77.1M  85% /mnt/containers/1/tmp/redis.sock
tmpfs                   512.0M    434.9M     77.1M  85% /mnt/containers/1/opt/code/local/stdorg/config
tmpfs                   512.0M    434.9M     77.1M  85% /mnt/containers/1/opt/var/data/zrobot/zrobot_data
tmpfs                   512.0M    434.9M     77.1M  85% /mnt/containers/1/coreX

In [14]: ncl.kernel_args
Out[14]:
{'intel_iommu': 'on',
 'kvm-intel.nested': '1',
 'console': 'tty1',
 'consoleblank': '0',
 'earlyprintk': 'serial,ttyS1,115200n8',
 'loglevel': '7',
 'zerotier': 'c7c8172af1f387a6',
 'organization': '"green',
 'edge': '',
 'cloud.canada.toronto"': '',
 'support': '',
 'farmer_id': 'eyJhbGciOiJFUzM4NCIsInR5cCI6IkpXVCJ9.eyJhenAiOiJ0aHJlZWZvbGQuZmFybWVycyIsImV4cCI6MTU1MzYyMDQ1MywiaXNzIjoiaXRzeW91b25saW5lIiwicmVmcmVzaF90b2tlbiI6IjJHc0VlS1E4N1p1SXVBa3BGcFdPM051a3RMa2UiLCJzY29wZSI6WyJ1c2VyOm1lbWJlcm9mOmdyZWVuIGVkZ2UgY2xvdWQuY2FuYWRhLnRvcm9udG8iXSwidXNlcm5hbWUiOiJtaW5fbm9sYW5fMiJ9.8WR1T_cASnT1Qippgf212oh3hO4EtdsIInkszBtG7RKiWa-xmDnURn0y-Lde8LvKI_fzV95W25SswcoARK2bko9j5sFkja6u2alNlHymBfLdymjl50mb27DKkwB6wFwX'}
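Note the odd keys `'"green'`, `'edge'`, and `'cloud.canada.toronto"'` in `kernel_args` above: they look like a single quoted `organization` value that was split on spaces. A naive kernel-cmdline parser reproduces exactly that shape (this is an illustration of the symptom, not the real zero-os parser):

```python
def naive_parse_cmdline(cmdline):
    """Split a kernel command line on whitespace and '=' without
    honoring quotes -- which breaks values containing spaces."""
    args = {}
    for chunk in cmdline.split():          # also splits inside quoted values
        key, _, value = chunk.partition('=')
        args[key] = value
    return args


cmdline = 'organization="green edge cloud.canada.toronto" support'
print(naive_parse_cmdline(cmdline))
# → {'organization': '"green', 'edge': '', 'cloud.canada.toronto"': '', 'support': ''}
```

That matches the fragments seen in Out[14], so the quoted organization name on the kernel command line is likely not surviving parsing intact.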

Below is the error in the _node_capacity service:

...: print(t.eco.trace)

Traceback (most recent call last):
  File "/opt/code/github/threefoldtech/0-robot/zerorobot/template/state.py", line 79, in check
    state = self.get(category, tag)
  File "/opt/code/github/threefoldtech/0-robot/zerorobot/template/state.py", line 58, in get
    raise StateCategoryNotExistsError("category %s does not exist" % category)
zerorobot.template.state.StateCategoryNotExistsError: category disks does not exist

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/code/github/threefoldtech/0-robot/zerorobot/task/task.py", line 81, in execute
    self._result = self._execute_greenlet.get(block=True, timeout=None)
  File "src/gevent/greenlet.py", line 709, in gevent._greenlet.Greenlet.get
  File "src/gevent/greenlet.py", line 317, in gevent._greenlet.Greenlet._raise_exception
  File "/usr/local/lib/python3.5/dist-packages/gevent/_compat.py", line 47, in reraise
    raise value.with_traceback(tb)
  File "src/gevent/greenlet.py", line 766, in gevent._greenlet.Greenlet.run
  File "/opt/code/github/threefoldtech/0-robot/zerorobot/template/decorator.py", line 64, in f_timeout
    return gl.get(block=True, timeout=seconds)
  File "src/gevent/greenlet.py", line 709, in gevent._greenlet.Greenlet.get
  File "src/gevent/greenlet.py", line 317, in gevent._greenlet.Greenlet._raise_exception
  File "/usr/local/lib/python3.5/dist-packages/gevent/_compat.py", line 47, in reraise
    raise value.with_traceback(tb)
  File "src/gevent/greenlet.py", line 766, in gevent._greenlet.Greenlet.run
  File "/opt/code/github/threefoldtech/0-templates/templates/node_capacity/node_capacity.py", line 37, in _total
    node.state.check('disks', 'mounted', 'ok')
  File "/opt/code/github/threefoldtech/0-robot/zerorobot/template/state.py", line 86, in check
    raise StateCheckError(err_msg)
zerorobot.template.state.StateCheckError: check for state disks:mounted:ok failed
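For reference, the two-level failure above (`StateCategoryNotExistsError` raised inside `check`, surfaced as `StateCheckError`) follows the zerorobot service-state pattern: state is a nested `{category: {tag: status}}` mapping. A minimal stand-in sketch, not the real `zerorobot.template.state` module:

```python
class StateCategoryNotExistsError(Exception):
    pass


class StateCheckError(Exception):
    pass


class State:
    """Toy version of a zerorobot service state: {category: {tag: status}}."""

    def __init__(self):
        self._state = {}

    def set(self, category, tag, status):
        self._state.setdefault(category, {})[tag] = status

    def get(self, category, tag):
        if category not in self._state:
            raise StateCategoryNotExistsError("category %s does not exist" % category)
        return self._state[category].get(tag)

    def check(self, category, tag, expected):
        try:
            status = self.get(category, tag)
        except StateCategoryNotExistsError:
            status = None   # missing category counts as a failed check
        if status != expected:
            raise StateCheckError(
                "check for state %s:%s:%s failed" % (category, tag, expected))


state = State()   # the 'disks' category was never set on this node service
try:
    state.check('disks', 'mounted', 'ok')
except StateCheckError as e:
    print(e)   # → check for state disks:mounted:ok failed
```

So the error simply means the node service never recorded `disks:mounted:ok`, i.e. no disks were successfully mounted.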


Capacity shows as development while it is already updated to the master version, as in the picture below:
![node](https://user-images.githubusercontent.com/21987477/55291437-d9986880-53de-11e9-82fe-707e5e057786.png)
maxux commented 5 years ago

The fix for this issue is described here: https://github.com/threefoldtech/0-core/pull/150. But there is an underlying issue: no disks are found, probably because of a hardware failure, according to dmesg.

The client bug is fixed in the latest development build (coming anytime soon).

Pishoy commented 5 years ago

@maxux the second issue has been resolved by updating to the new master version, but the first issue still exists; it is not resolved by the new master update, as shown below:

In [37]: ncl = j.clients.zos.get('node', data = { 'host':'10.102.44.252', 'password_':jwt},)

In [38]: ncl.name
Out[38]: 'd4c9efce8846'

In [39]: ncl.client.system('uptime').get()
Out[39]: 
STATE: 0 SUCCESS
STDOUT:
 10:42:45 up 27 min,  0 users,  load average: 2.07, 2.52, 2.22

STDERR:

DATA:

In [40]: ncl.zerodbs.prepare()
[Thu04 10:44] - __init__.py       :96  :calelib.sal_zos.zerodb - INFO     - create storage pool d6d6f917-021a-4c97-a408-048abdf012c0 on /dev/sdaa
---------------------------------------------------------------------------
ResultError                               Traceback (most recent call last)
/usr/local/bin/js_shell in <module>()
----> 1 ncl.zerodbs.prepare()

/opt/code/github/threefoldtech/jumpscale_lib/JumpscaleLib/sal_zos/zerodb/__init__.py in prepare(self)
     96             logger.info("create storage pool %s on %s", name, device)
     97             sp = self.node.storagepools.create(
---> 98                 name, device=device, metadata_profile='single', data_profile='single', overwrite=True)
     99             storagepools.append(sp)
    100 

/opt/code/github/threefoldtech/jumpscale_lib/JumpscaleLib/sal_zos/storage/StoragePool.py in create(self, name, device, metadata_profile, data_profile, overwrite)
     81         part = _prepare_device(self.node, device)
     82 
---> 83         self.client.btrfs.create(label, [part.devicename], metadata_profile, data_profile, overwrite=overwrite)
     84         pool = StoragePool(self.node, name, part.devicename)
     85         return pool

/opt/code/github/threefoldtech/jumpscale_lib/JumpscaleLib/clients/zero_os_protocol/BtrfsManager.py in create(self, label, devices, metadata_profile, data_profile, overwrite)
     66 
     67         self._create_chk.check(args)
---> 68         self._client.sync('btrfs.create', args)
     69 
     70     def device_add(self, mountpoint, *device):

/opt/code/github/threefoldtech/jumpscale_lib/JumpscaleLib/clients/zero_os_protocol/BaseClient.py in sync(self, command, arguments, tags, id)
    113             if not result.code:
    114                 result._code = 500
--> 115             raise ResultError(msg='%s' % result.data, code=result.code)
    116 
    117         return result

ResultError: "(ERROR): [btrfs-progs v4.19 \nSee http://btrfs.wiki.kernel.org for more information.\n\n ERROR: /dev/sdaa1 is mounted\n]"
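`btrfs.create` fails here because `/dev/sdaa1` is already mounted (it backs the zos-cache pool, per the `df` output below). A hedged sketch of a pre-check that pool preparation could do before calling `btrfs.create`; the mounted set is hard-coded for illustration, and the `<device>1` first-partition convention is inferred from the lsblk output above:

```python
MOUNTED_PARTITIONS = {'/dev/sdaa1'}   # from the df output in this issue


def devices_safe_to_prepare(candidates, mounted=MOUNTED_PARTITIONS):
    """Filter out devices whose first partition is already mounted,
    since btrfs.create on those fails with 'ERROR: ... is mounted'."""
    safe = []
    for dev in candidates:
        part = dev + '1'              # first partition, by naming convention
        if part in mounted:
            continue                  # would trigger the ResultError above
        safe.append(dev)
    return safe


print(devices_safe_to_prepare(['/dev/sdaa', '/dev/sdab']))  # → ['/dev/sdab']
```

This is only a sketch of the symptom; the real `zerodbs.prepare()` picks devices itself, and the actual question is why it selected a device that is already in use as cache.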

In [41]: ncl.client.bash('df -kh').get()
Out[41]: 
STATE: 0 SUCCESS
STDOUT:
Filesystem                Size      Used Available Use% Mounted on
tmpfs                   512.0M    325.7M    186.3M  64% /
devtmpfs                 15.5G         0     15.5G   0% /dev
cgroup_root              15.5G         0     15.5G   0% /sys/fs/cgroup
/dev/sdaa1                1.7T    239.0M      1.7T   0% /mnt/storagepools/sp_zos-cache
/dev/sdaa1                1.7T    239.0M      1.7T   0% /var/cache
/dev/sdaa1                1.7T    239.0M      1.7T   0% /var/log
overlay                   1.7T    239.0M      1.7T   0% /mnt/containers/1
/dev/sdaa1                1.7T    239.0M      1.7T   0% /mnt/containers/1/opt/code/local/stdorg/config
/dev/sdaa1                1.7T    239.0M      1.7T   0% /mnt/containers/1/opt/var/data/zrobot/zrobot_data
/dev/sdaa1                1.7T    239.0M      1.7T   0% /mnt/containers/1/root/jumpscale/cfg
/dev/sdaa1                1.7T    239.0M      1.7T   0% /mnt/containers/1/root/.ssh
tmpfs                   512.0M    325.7M    186.3M  64% /mnt/containers/1/tmp/redis.sock
tmpfs                   512.0M    325.7M    186.3M  64% /mnt/containers/1/coreX

STDERR:

DATA:

In [42]: ncl.disks.list()
Out[42]: 
[Disk <sda>,
 Disk <sdb>,
 Disk <sdc>,
 Disk <sdd>,
 Disk <sde>,
 Disk <sdf>,
 Disk <sdg>,
 Disk <sdh>,
 Disk <sdi>,
 Disk <sdj>,
 Disk <sdk>,
 Disk <sdl>,
 Disk <sdm>,
 Disk <sdn>,
 Disk <sdo>,
 Disk <sdp>,
 Disk <sdq>,
 Disk <sdr>,
 Disk <sds>,
 Disk <sdt>,
 Disk <sdu>,
 Disk <sdv>,
 Disk <sdw>,
 Disk <sdx>,
 Disk <sdy>,
 Disk <sdz>,
 Disk <sdaa>,
 Disk <sdab>,
 Disk <sdac>,
 Disk <sdad>,
 Disk <sdae>,
 Disk <sdaf>,
 Disk <sdag>,
 Disk <sdah>,
 Disk <sdai>,
 Disk <sdaj>,
 Disk <sdak>,
 Disk <sdal>,
 Disk <sdam>,
 Disk <sdan>,
 Disk <sdao>,
 Disk <sdap>,
 Disk <sdaq>,
 Disk <sdar>,
 Disk <sdas>,
 Disk <sdat>,
 Disk <sdau>,
 Disk <sdav>,
 Disk <sdaw>,
 Disk <sdax>,
 Disk <sday>,
 Disk <sdaz>,
 Disk <sdba>,
 Disk <sdbb>,
 Disk <sdbc>,
 Disk <sdbd>,
 Disk <sdbe>,
 Disk <sdbf>,
 Disk <sdbg>,
 Disk <sdbh>]

In [46]: ncl.shell()            

# dmesg | grep -i error
[    0.455006] Error parsing PCC subspaces from PCCT
[   12.195584] ERST: Error Record Serialization Table (ERST) support is initialized.
[   12.215907] ghes_edac: This EDAC driver relies on BIOS to enumerate memory and get error reports.
[   12.437889] RAS: Correctable Errors collector initialized.
maxux commented 5 years ago

The 500 error is just because the farmer id provided doesn't exist on the capacity website.

Pishoy commented 5 years ago

Changing to js development and running ncl.zerodbs.prepare() solved the above issue.