clear-datacenter / plan

MIT License

Intelligent Diagnosis #9

Open wanghaisheng opened 8 years ago

wanghaisheng commented 8 years ago

Enlitic

Lung cancer nodules; bone micro-fractures; test dataset: the LIDC dataset; and more on lung cancer nodules in chest CT images

Enlitic's technology is already used to help radiologists detect and diagnose the early signs of lung cancer and detect bone fractures, including in complex joints with multiple bones, such as the wrist.

Enlitic adapted deep learning to automatically detect lung cancer nodules in chest CT images 50 per cent more accurately than an expert panel of thoracic radiologists, as found in a US trial looking at 1,000 people with cancer and 5,000 people without. The reduction of false negatives and the ability to detect early-stage nodules saves lives. The simultaneous reduction of false positives leads to fewer unnecessary and often costly biopsies, and less patient anxiety.

Enlitic benchmarked its performance against the publicly available, NIH-funded Lung Image Database Consortium data set, demonstrating its commitment to transparency.

Bone fractures – Enlitic has shown it is three times better at detecting extremity (e.g. wrist) bone fractures, which are very common yet extremely difficult for radiologists to reliably detect. Errors can lead to improper bone healing, resulting in a lifetime of alignment issues.

These fractures are often represented only by 4×4 pixels in a 4,000×4,000-pixel X-ray image, pushing the limits of computer vision technology.

In detection of fractures, Enlitic achieved 0.97 AUC (the most common measure of predictive modelling accuracy), more than three times better than the 0.85 AUC achieved by leading radiologists and many times better than the 0.71 AUC achieved by traditional computer vision approaches.
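For readers unfamiliar with the metric, AUC is the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A minimal sketch with synthetic scores (not Enlitic's data or model):

```python
import numpy as np

def auc(labels, scores):
    """AUC via the Mann-Whitney identity: the probability that a random
    positive case is scored above a random negative case (ties count half)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Synthetic fracture scores: positive images tend to score higher.
rng = np.random.default_rng(0)
neg_scores = rng.normal(0.0, 1.0, 1000)   # images without a fracture
pos_scores = rng.normal(2.0, 1.0, 1000)   # images with a fracture
scores = np.concatenate([neg_scores, pos_scores])
labels = np.concatenate([np.zeros(1000), np.ones(1000)])
print(round(auc(labels, scores), 2))
```

An AUC of 0.97 therefore means a fractured image outranks a non-fractured one 97% of the time, against 0.85 for the radiologists in the comparison above.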

Enlitic was able to support analysis of thousands of image studies in a fraction of the time needed for a human to analyse a single study.

First, he says the Enlitic system did a 50 percent better job detecting malignant lung nodules than thoracic radiology experts. After training on images from 6,000 people—1,000 people with lung cancer, 5,000 people without—Enlitic used images from about 100 patients to test its system versus the human experts.

Second, Enlitic says its software fared much better than radiologists at finding hard-to-detect fractures in the wrist. It used an old study of about 100,000 images as its baseline. In 200 X-rays where the Enlitic system’s diagnosis diverged from the original report, two radiologists did a close inspection and found that the algorithm was correct 75 percent of the time, said Howard. “This shows we were more accurate overall than radiologists, although a combination of radiologist plus algorithm would be best of all, which is what we provide,” he said.

This partnership will bring together Enlitic's capability and Capitol's financial investment, its network of nearly 100 radiology centres (collectively conducting ~150,000 X-rays, CTs and MRIs per year) and its experience in the form of their archive of about one million patients' diagnosis and imaging data.

DermaCompare

Melanoma detection

MedyMatch

Stroke

Infervision (推想科技)

Last September, in a chest X-ray diagnostic setting, the generated diagnostic reports matched the physicians' reports more than 90% of the time. "Change the test environment, or use another hospital's data, and the results may differ."

At present, Infervision's diagnostic models cover nearly 10 kinds of heart- and lung-related X-ray findings, such as cardiac enlargement, pleural effusion and pneumonia.

Diabetic Retinopathy Detection

Kaggle Competition

Breast Cancer-related Cell Detection

Deep max-pooling CNN to detect mitosis in breast histology images (2012 ICPR Mitosis Detection Contest)

Brain Image Segmentation

Using CNNs with multi-modal MR images for brain image segmentation:
• To segment the brain into CSF, WM and GM, multi-modal inputs (T1, T2, FA) are used
• A CNN is trained on multi-modal brain MR patches for patch classification
• From the MR images of 8 infants, 10,000 patches are generated for training
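The patch pipeline in the bullets above can be sketched as follows; the volumes, coordinates and patch size here are synthetic stand-ins, not the study's actual data:

```python
import numpy as np

def extract_patch(volumes, center, size=13):
    """Stack one size x size patch per modality around a voxel.

    volumes: list of co-registered 3D arrays (e.g. T1, T2, FA), same shape.
    center:  (z, y, x) voxel whose tissue class (CSF/WM/GM) is the label.
    Returns shape (n_modalities, size, size): 2D axial patches with one
    channel per modality, ready to feed a patch-classification CNN.
    """
    r = size // 2
    z, y, x = center
    return np.stack([v[z, y - r:y + r + 1, x - r:x + r + 1] for v in volumes])

# Synthetic stand-ins for co-registered T1, T2 and FA volumes.
rng = np.random.default_rng(1)
t1, t2, fa = (rng.random((32, 64, 64)) for _ in range(3))

patch = extract_patch([t1, t2, fa], center=(16, 32, 32))
print(patch.shape)  # (3, 13, 13)
```

Repeating this around 10,000 labelled voxels yields the training set the bullets describe.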

wanghaisheng commented 8 years ago

Among existing medical applications there is one that uses cloud-based AI and a phone camera to perform "Total Body Photography" to detect melanoma. The app, DermaCompare, is a free download from the App Store, was developed by the Israeli company Emerald Medical Applications (MRLA), and has been registered with the FDA. It is HIPAA-compliant (the Health Insurance Portability and Accountability Act, which protects patient privacy), says Emerald Medical CEO Lior Wayne. After downloading the app, any user or physician can use a smartphone camera and the company's patented comparison algorithm to check whether a mole shows signs of melanoma. Wayne believes DermaCompare is the only skin-cancer diagnostic app that uses artificial intelligence.

wanghaisheng commented 8 years ago

Diagnosing cancer from blood samples is very challenging. Doctors typically add chemicals to a sample to make cancer cells visible, but this renders the sample unusable for other tests. Other diagnostic techniques rely on the abnormal structure of cancer cells, but these take more time and can mistake deformed healthy cells for cancerous ones. Researchers at UCLA have now developed a new technique that combines a special microscope with AI algorithms to identify cancer cells in a sample non-destructively. The technique not only reduces the time and effort of cancer diagnosis, but is also a significant achievement for precision medicine.

The microscope used, a photonic time-stretch microscope, splits nanosecond pulses of light into multiple beams, capturing hundreds of thousands of images per second. The images are fed into a computer program that classifies cells by 16 physical characteristics, such as diameter, circularity and the amount of light absorbed.

Using a set of previously analysed images, the researchers trained the program with deep learning to recognise cancer cells. After several rounds of testing, they found the system's recognition accuracy was at least 17% better than existing analysis tools. The researchers believe their method will lead to more data-driven cancer diagnosis systems.
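As an illustration of classifying cells by physical features like the 16 above, here is a minimal logistic-regression sketch on synthetic 16-dimensional feature vectors; the UCLA system used deep learning on far richer data, so the feature shifts, labels and learning rate below are all illustrative only:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 400, 16  # 16 physical features per cell: diameter, circularity, ...

# Synthetic cells: "cancerous" cells (label 1) shifted in a few features.
labels = rng.integers(0, 2, n)
features = rng.normal(0.0, 1.0, (n, d))
features[labels == 1, :4] += 1.5  # e.g. larger diameter, higher absorbance

# Logistic regression trained by plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(features @ w + b)))  # predicted probability
    w -= 0.5 * (features.T @ (p - labels) / n)
    b -= 0.5 * (p - labels).mean()

accuracy = (((features @ w + b) > 0) == labels).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Even this toy linear model separates the two synthetic populations well; the paper's point is that learned deep features beat such hand-picked pipelines by a further margin.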

Deep learning has already been used to help diagnose disease by analysing patients' genes. Since this technique can identify cancer cells that other methods may miss, it could also help researchers better understand the genetic mutations that trigger cancer and so create new treatments.

wanghaisheng commented 8 years ago

Tel Aviv-based MedyMatch Technology is developing an AI platform for critical areas of patient care that can study data faster and more accurately than the naked eye, helping physicians make diagnostic decisions across a range of conditions. The company hopes to bring a market-ready product out in the first half of 2017.

Stroke patients are MedyMatch's first focus.

"In stroke care, speed is everything," MedyMatch chairman and CEO Gene Saragnese said in an interview, "because with every minute that passes, brain cells die."

The first question doctors must answer when treating a stroke is which kind they are dealing with: a brain haemorrhage, or a blockage preventing blood from reaching the brain? The two are treated in completely different ways, and a wrong diagnosis and treatment can kill vital brain cells.

"Our goal is to enable more accurate decisions at the very onset of a stroke, so patients can quickly receive the right therapy," Saragnese said.

The product is software that takes images from an ordinary CT scanner, processes them in the cloud with MedyMatch's proprietary algorithms, and annotates the images to flag areas of interest, so physicians can immediately see where bleeding may be occurring; the processed images are sent back to the physician's workstation alongside the originals.

With this process, Saragnese said, physicians could have an expert opinion within three to five minutes. Using deep learning, a series of example images is fed to the computer to set a baseline for reading scans; from further batches of images the computer then "learns" what bleeding looks like. In other words, "you train the computer with examples, and after that training the computer can start reading images on its own." Saragnese, formerly CEO of Philips Imaging Systems, became MedyMatch's CEO in February.
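MedyMatch's algorithm is proprietary, so the "flag suspected bleeding" step can only be illustrated crudely. The sketch below marks voxels in the Hounsfield-unit range typical of acute blood on non-contrast CT (roughly 50-100 HU, brighter than brain parenchyma) on a synthetic volume; the real system is of course far more sophisticated than a threshold:

```python
import numpy as np

def highlight_candidates(ct_hu, lo=50, hi=100):
    """Boolean mask of voxels in the acute-blood HU range.

    A crude stand-in for hemorrhage highlighting: acute blood on
    non-contrast head CT is typically ~50-100 Hounsfield units (HU),
    brighter than brain tissue (~20-45 HU).
    """
    return (ct_hu >= lo) & (ct_hu <= hi)

# Synthetic head CT: background tissue ~30 HU with one bright "bleed".
volume = np.full((8, 64, 64), 30.0)
volume[4, 20:28, 20:28] = 70.0  # hypothetical hemorrhage region

mask = highlight_candidates(volume)
print(mask.sum())  # 64 flagged voxels in this synthetic example
```

The mask would then be rendered as an overlay on the returned images, the "notes" the article describes.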

Working with hospitals in Israel and the United States, including the Hadassah Medical Center in Jerusalem and Massachusetts General Hospital in Boston, MedyMatch has obtained billions of images from millions of cases. "Our experts come from these institutions, and they help us train the software to read images," Saragnese said.

According to the American Heart Association, stroke is the fourth-leading killer in the United States, and the cost of treating it could grow from $71.6 billion in 2010 to about $183 billion by 2030. Dr. Gabriel Polliack, head of strategic development at the Israeli urgent-care network TEREM, said that despite advances in medical imaging, medical misdiagnosis rates have hovered around 30% for decades.

"There is a need for a product on the market that acts as a second pair of eyes for radiologists and physicians, helping them overcome the many limitations that stand in the way of a correct diagnosis," Polliack said. He sits on MedyMatch's medical advisory board and has actively advised the company since it was founded more than two years ago.

"It's a great idea, with important clinical value in improving patient outcomes, and it will directly affect the cost of care," Polliack said. "It means MedyMatch is setting a gold standard for the healthcare industry: improving patient outcomes while lowering costs."

Saragnese said MedyMatch closed a $2 million first funding round earlier this year and is now raising a further $8 million; its product will need approval from the US Food and Drug Administration (FDA) and other regulators.

"Stroke is obviously a global problem, and for the company it is also a global opportunity," he said. "Others are doing machine learning on images, but mainly for cancer. Practically no one has entered this particular stroke niche, so there are not many competitors."

New York data firm CB Insights says deals involving AI startups have grown nearly sevenfold over the past five years, from four in the first quarter of 2011 to 27 in the first quarter of this year. About 15% of first-quarter deals went to companies focused on healthcare AI applications.

CB Insights tech-industry analyst Deepashri Varadharajan wrote in an email that since 2010, AI companies have raised a total of $967 million, with funding flowing to 13 countries and 10 industry categories, including business intelligence, e-commerce and healthcare.

"Specifically, healthcare is using AI to process massive amounts of medical data, predict risk and make diagnosis more accurate," Varadharajan said.

But MedyMatch also faces challenges, Saragnese said. One key issue is ensuring the cloud infrastructure for uploading images from hospitals works well. That infrastructure is a focus of giants such as IBM, General Electric and Philips. "These are the companies building cloud infrastructure, and we hope to stand alongside them in the future."

Another challenge to overcome is that physicians may feel threatened by the technology. The Times of Israel reached Dr. Guy Raphaeli, a specialist in stroke neurology and interventional neuroradiology at Rabin Medical Center in Petah Tikva, Israel, by phone for his view. "The technology looks interesting and could serve as an additional tool to boost physicians' confidence," he said. Raphaeli had not heard of MedyMatch before the call.

"The technology could also be used in rural and remote hospitals that lack expertise. But I don't think a computer can replace a physician's clinical skills; physicians can touch and understand the patient and take the whole clinical picture into account," Raphaeli said. "I don't think Israel needs this kind of technology, because our confidence in our work is high and many hospitals are interconnected, so physicians can ask each other for help when needed."

Saragnese acknowledged that the product's customers may not be large general hospitals, which have no shortage of experts, but rather smaller rural or community hospitals where physicians may be less experienced. "In a rural hospital in China, for example, this could become a tool that helps less-experienced doctors read images. The technology could target those hospitals."

MedyMatch is now exploring several revenue models, Saragnese said, one of which is subscription. "At less than $10 per use, the price is low."

wanghaisheng commented 8 years ago

3MMI CLOUD is positioned in the next-generation "Internet + healthcare" and medical big-data industry, focused on cloud services, cloud storage and intelligent image reading for massive volumes of medical images. The company positions itself as a good helper to doctors, a good adviser to patients and a good partner to hospitals, and its long-term goal is to become a pioneering, technology-driven Internet + healthcare platform with influence across medical imaging, medicine and big data. The Internet + healthcare market it faces has three characteristics: (1) it is large, worth hundreds of billions of dollars a year; (2) it is high-tech, with barriers to entry that only a handful of companies can clear; and (3) it has broad prospects, since opening up medical big data leads into a trillion-scale market.

3MMI CLOUD's cloud service and intelligent image reading is billed as the world's first medical-image analysis platform requiring no software interaction of any kind, providing fully automatic AI-based reading of medical images that interprets patient images objectively, quantitatively and in real time, alongside cloud storage and cloud services.

Intelligent image reading addresses three pain points:

1. A shortage of pathologists and a heavy reading workload;

2. High misdiagnosis rates and uneven reading skill among doctors;

3. The cost and security of data storage.

Solving these pain points will greatly improve the efficiency of healthcare, for the sake of doctors and patients alike!

3MMI CLOUD's cloud service and intelligent reading not only rides this era's market opportunity but also addresses the shortcomings of traditional medicine, bringing real convenience to healthcare. The era in which Internet + healthcare disrupts traditional medicine is coming, and that is worth reflecting on!

wanghaisheng commented 8 years ago

http://www.datamorrow.com/ Infervision (推想科技) is a technology company applying artificial intelligence to healthcare. Starting from medical imaging data, it analyses and cross-references the corresponding clinical material, including clinical reports and laboratory research data, to mine the correlations behind the data and provide medical solutions.

Deep learning can process massive volumes of medical imaging data (X-ray, CT scans and so on) end to end. By learning from historical imaging data and cross-referencing the corresponding clinical material, including clinical reports and laboratory research data, it mines the correlations behind the data. The AI automatically learns and accumulates physicians' diagnostic expertise, analyses and identifies lesions in medical images, recommends treatment plans and assists physicians with diagnosis, lightening hospital physicians' workload and lowering the probability of missed or incorrect diagnoses. We are committed to understanding the real needs of patients and clinicians and to continuously improving our technology to provide medical solutions. Our data scientists work closely with radiology and clinical experts to ensure our technology delivers precise AI-assisted diagnosis, giving patients an efficient, first-class medical experience, fundamentally improving diagnostic efficiency and optimising the clinical workflow.


Join Infervision: guided by the belief that value is discovered in data, the Infervision team focuses on intelligent healthcare. We are a young, hard-working team, full of new ideas, passion and energy in work, study and life alike. Passionate about tech entrepreneurship but haven't found the right team? Join Infervision; it will be a fresh start. Send your résumé to hr@infervision.com.

Image Processing Algorithm Researcher

Responsibilities

1. Take part in designing the core medical model algorithms

2. Apply image models to medical big-data computation

3. Contribute to publishing papers on medical image processing

Skills and experience

1. Background in image processing or computer vision; master's degree with work experience, or fresh PhD graduate

2. Implement processing of raw scanner-output images on Linux

3. A strong image-processing background or related work experience is preferred: image enhancement, denoising, feature extraction, image matching, image segmentation, image retrieval; familiarity with open-source libraries such as OpenCV

4. Experience with image processing for large medical imaging equipment, or familiarity with deep learning algorithms, is a plus

5. Proficient in Python and C/C++, with a grasp of basic computer-science algorithms, good coding habits, a collaborative spirit and a sense of responsibility

Preferred skills

1. Familiar with GPU architecture, with CUDA development experience

2. Experience designing distributed computing and storage systems

Deep Learning Algorithm Researcher

Responsibilities

1. Take part in designing the core medical model algorithms

2. Apply image models to medical big-data computation

3. Contribute to publishing papers on medical image processing

Skills and experience

1. Background in machine learning or deep learning

2. Classify and analyse big data on Linux

3. Solid machine-learning or deep-learning knowledge: feature extraction, data classification, building convolutional neural networks (CNN) or deep belief networks (DBN); familiarity with open-source libraries such as Caffe, Theano, Keras, Lasagne and Torch

4. Experience with image processing, or familiarity with CNN-based image processing, is a plus

5. Proficient in Python and C/C++, with a grasp of basic computer-science algorithms, good coding habits, a collaborative spirit and a sense of responsibility

Preferred skills

1. Familiar with GPU architecture, with CUDA development experience

2. Experience designing distributed computing and storage systems

Data Visualization Engineer

Responsibilities

1. Support the deep-learning engineers with front-end visualization and presentation of model results

Skills and experience

1. Bachelor's degree or above in computer science, statistics or a related field, with 1-3 years of development experience

2. Familiar with JavaScript, Ajax, jQuery and related Web technologies

3. Proficient in at least one front-end framework besides jQuery, such as AngularJS or React

4. Understanding of basic data structures and of front-end/back-end data exchange

5. Experience with front-end graphics tools such as D3 or WebGL

Preferred skills

1. Proficient in Python

2. Some exposure to machine learning and deep learning

According to a job post on V2EX, however, the official site should be http://www.infervision.com/

Infervision focuses on intelligent healthcare, using deep learning algorithms for medical imaging diagnosis. The founding team comes from the University of Chicago, MIT, Duke University and other well-known American universities, and has long applied machine learning to artificial intelligence, medical imaging and image recognition, and financial statistics.

The Infervision team makes technical execution its core culture and hopes to grow alongside others who share a passion for machine learning and AI, using AI algorithms to further optimise the allocation of medical resources.

As the team grows, we are now hiring data visualization engineers, database/ETL engineers and image-processing algorithm researchers! Feel free to apply or refer a friend :)


Database Development / ETL Engineer

    Responsibilities:
        Develop the data warehouse for the company's medical big-data analysis platform
        Adapt to the database systems of different hospitals and provide timely, accurate data support to modellers and physicians (analysing, collecting and cleaning source data and loading it into the warehouse)
        Analyse the data needs of modellers and physicians and respond proactively
        Take part in designing and developing the company's data products

    Requirements:
        Familiar with at least one relational database (SQL Server, MySQL, etc.), with at least one year of ETL experience
        Proficient in SQL and skilled at SQL optimisation
        Strong requirements-analysis and communication skills
        Ability to analyse the business relationships behind users' data needs and mine their value is a plus
        Experience with natural language processing in databases is a plus
        Experience with R or Python data programming and with Linux is a plus


Raised ¥11 million: he built an AlphaGo for radiologists. Machine learning generates the reports, covering more than 10 kinds of X-ray findings. Last September, in a chest X-ray diagnostic setting, the generated diagnostic reports matched the physicians' reports more than 90% of the time. "Change the test environment, or use another hospital's data, and the results may differ."

At present, Infervision's diagnostic models cover nearly 10 kinds of heart- and lung-related X-ray findings, such as cardiac enlargement, pleural effusion and pneumonia.

wanghaisheng commented 8 years ago

Enlitic http://www.qdaily.com/cooperation/articles/toutiao/28036.html

A Silicon Valley company that analyses X-rays is coming to do business in China [Qdaily] Qdaily, 2016-06-11 16:55:21

People may have reason to worry that machines will take over more human work, because more and more technology companies are pouring enormous manpower, money and time into exactly that goal.

Enlitic is one of them: it is teaching machines to read X-rays and spot lesions a physician might miss. Now, to give its machines richer cases to learn from, the company is preparing to expand into China.

Put simply, Enlitic's technology feeds different kinds of medical imaging data to computers; from X-rays, CT scans and magnetic resonance images (MRI), the machine must learn to find injuries, disorders and tumours.

The goal is not to replace physicians, but to save them time so they can focus on diagnosis. Between an X-ray being taken, stored and transmitted to the physician, Enlitic runs a probability analysis directly, a process that takes only a few milliseconds.

Enlitic CEO Dr. Igor Barani told Qdaily (www.qdaily.com) that the aim was never to replace physicians, but to let a doctor who used to read 200 X-rays a day read 400-500.

Jeremy Howard, a data scientist and Enlitic co-founder, puts it this way: medical diagnosis is, at its core, a data problem. You need to combine scans, lab results, patient history and other data and turn them into medical insight. Recent advances in machine learning show a new possibility: machines can turn large amounts of image data into deep insight in a short time, recognising subtle changes.

The company is less than two years old. From the start, Enlitic sought partnerships with medical institutions in Brazil, China, India and the United States, obtaining an initial batch of data from medical institutions, hardware vendors and radiology clinics. In October 2014 Enlitic raised a $2 million seed round; a year later it raised a $10 million Series A.

It was from then on that Enlitic began promoting its technology in Australia and Asia. The $10 million Series A was led by the Australian diagnostic-imaging provider Capitol Health.

Asked whether large companies with more data and money have the advantage in deep learning, Igor told Qdaily: "Yes and no. One issue is that although there is now a huge amount of data, at Google for example, most of it is unstructured; what matters more is how you organise it and apply it in different models."

In other words, even big companies want to make more accurate judgments with less data. Trying to approach the results of massive datasets while using far less data is arguably the clearest trend in deep learning today.

Enlitic has already obtained a large amount of X-ray data and matching diagnoses from medical institutions in Europe and Australia. Expanding into Asia is likewise meant to make the database more complete. "We recently visited Beijing's 301 Hospital to discuss a partnership, though nothing has been finalised," Igor told us.

Header image from zmescience

wanghaisheng commented 8 years ago

http://www.cnbeta.com/articles/321915.htm Computers can read CT scans too, automatically marking brain tumours for doctors. 2014-08-24 11:17:31. Source: NetEase Tech

According to foreign media reports, machines are taking over more and more work once done by humans, and detecting disease may be their next task. A new company called Enlitic has set its sights on the exam room, hoping to use computers to diagnose from images. Enlitic co-founder and CEO Jeremy Howard, formerly president and chief scientist of the data-mining startup Kaggle, says the idea is to teach computers to recognise different injuries, diseases and disorders by showing them hundreds of X-rays, MRI scans, CT scans and other films.


He believes that with enough experience, computers can begin to recognise patients' problems and quickly flag images for doctors to examine, sparing them a great deal of image-analysis work.

Machine learning

As high-performance computers have advanced and algorithms have become better at teaching computers to recognise patterns, applications of machine learning have exploded in recent years. Some recent projects seek to mimic the workings of the human brain in software or hardware, an approach known as "deep learning". Show a computer enough pictures of yellow taxis driving down the street, for example, and it can begin to recognise them and tell whether they are on a street or somewhere else. This is exactly the research strategy Enlitic is pursuing.

But Howard points out that while machine learning has made great strides in computer vision, its application in medicine still lags far behind.

Enlitic's idea is that, shown enough images of a condition such as a brain tumour, a computer can begin to mark the tumour's location for doctors automatically.

Howard notes that images of a given medical condition usually look quite consistent, which should help machine learning. Yellow taxis can appear in all kinds of settings, whereas chest X-rays look basically the same in angle, position and colour. That simplifies the task of spotting the important differences between images, such as identifying the one that contains a tumour.

Since a full diagnosis involves far more than finding a target in an image, Howard says doctors might use Enlitic to scan a large, constantly updated database for all images of a particular kind, such as every liver image similar to a particular patient's liver. "I don't mean images with similar pixels, but images that a deep learning algorithm predicts will have similar outcomes and useful interventions," he said.
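Retrieval by "similar according to the deep learning algorithm" is typically implemented as nearest-neighbour search over learned feature vectors rather than pixels. A minimal sketch with hypothetical embeddings (the dimensions, database size and query are all made up for illustration):

```python
import numpy as np

def most_similar(query_vec, database, k=3):
    """Indices of the k database embeddings closest to the query by
    cosine similarity: similarity in feature space, not pixel space."""
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    q = query_vec / np.linalg.norm(query_vec)
    return np.argsort(db @ q)[::-1][:k]

# Hypothetical 128-d embeddings for 1,000 archived liver images, plus a
# query embedding deliberately placed near embedding number 42.
rng = np.random.default_rng(3)
database = rng.normal(size=(1000, 128))
query = database[42] + 0.1 * rng.normal(size=128)

print(most_similar(query, database))  # index 42 ranks first
```

In a real system the embeddings would come from the network's penultimate layer, so "nearby" means similar predicted outcome rather than similar appearance.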

Recent advances in machine learning suggest that, in theory, computers could also extract useful information from patterns of patient behaviour: what a patient in pain sounds like, or how much a patient flinches when a wound is touched. Howard believes such data could eventually be combined with Enlitic's computer vision technology to greatly improve the speed and accuracy of diagnosis.

Improving disease detection

The territory Enlitic is entering is not unexplored. In 2011, Stanford researchers reported that a computer they trained was more accurate than humans at analysing microscopy images of breast cancer.

Some computing giants have also devoted considerable resources to exploring medicine. IBM's supercomputer Watson is helping the University of Texas MD Anderson Cancer Center perform pattern recognition on the medical charts and histories of more than 100,000 patients, and Microsoft has launched its InnerEye computing system to analyse medical images and track disease progression.

For now, all of those machines still require human operation, but Enlitic hopes the ones it is researching will greatly speed up disease detection.

"We are not trying to replace radiologists," Howard said. "We want to give them the information they need to make their work ten times more efficient."

wanghaisheng commented 8 years ago

Enlitic couples deep learning with vast stores of medical data to advance diagnostics and improve patient outcomes


Data-Driven Medicine

All clinical diagnosis is based on data. Every time a doctor sees a patient, she is solving a complex data problem. Symptoms, patient history, lab test results, medical images, comparison with other patient cases, the list of possible diseases or ailments, the treatment options -- all of these are forms of data that must be remembered, understood, and integrated correctly.

There is a lot of medical data to comprehend. There are over 12,000 medical diagnoses, each with numerous different treatment options, in the International Statistical Classification of Diseases and Related Health Problems (ICD-10). At the health system level, the volume of digital medical data is growing rapidly: from 500 petabytes in 2012, it is expected to reach an astonishing 25,000 petabytes by 2020 (IDC Health Insights Report, 2013). This growth is driven by medical imaging, laboratory testing, electronic health records, and increasingly available genomic data. Unfortunately, much of this data is not being used to improve the diagnoses and treatments of patients.

Past attempts at data-driven medicine have been disappointingly limited. Medical interpretation programs have not lived up to their promise. They have suffered from a myriad of problems, including slow workflows, inadequate accuracy, and lack of scalability to handle vast amounts of data. "Existing tools don't necessarily synthesize available data in a useful way to impact treatment paradigms," notes a doctor at a leading medical center. It is no surprise that the National Institute of Medicine estimates that 12 million Americans suffer from misdiagnoses, resulting in the unnecessary loss of lives and billions of dollars in cost (Improving Diagnosis in Health Care, 2015).

Today we are on the cusp of an exciting revolution in healthcare. Deep learning enables data-driven medicine at scale. Listed as one of MIT Tech Review's Top 10 Breakthroughs of 2013, deep learning has completely revolutionized how computers handle massive amounts of big data. Thanks to deep learning, computer performance is on par with humans at some of the hardest tasks in medicine (e.g., segmenting the human brain) -- and performance continues to improve rapidly.


"Deep learning is an algorithm inspired by how the brain works, and as a result it is an algorithm which has no theoretical limitations. The more data you give it, and the more computation time you give it, the better it becomes."

Jeremy Howard, Enlitic founder, in a 2014 TED.com talk.

Deep Learning

Deep learning is revolutionizing what is possible for computers to achieve. It has become a breakthrough tool across financial, communication and online consumer industries. For example, deep learning allows Spanish and English speakers to communicate in real time via automated translation, and is defining the future of driving through driverless cars. With Enlitic, doctors can for the first time use the predictive power of deep learning to directly improve patients' medical outcomes.

How does it work? Before deep learning, engineers had to create a set of rules to manually select a tiny subset of the massive amount of information available. The resulting software was only as powerful, comprehensive, and flexible as that set of rules. In contrast, deep learning examines all of the available information to automatically discover, without human intervention, which parts are informative for the task at hand.

Enlitic is using deep learning to usher in a new era of data-driven medicine. The company's tool aims to augment doctors and make it possible to distill actionable diagnostic insights in real time from millions of prior patient cases and other medical data. Deep learning increases what each individual doctor can achieve, multiplying their effectiveness.

In an initial benchmarking test against the publicly available LIDC dataset, Enlitic's technology detected lung cancer nodules in chest CT images 50% more accurately than an expert panel of radiologists. In initial benchmarking tests, Enlitic's deep learning tool regularly detected tiny fractures as small as 0.01% of the total X-ray image. The tool is designed to support many diseases simultaneously.

Enlitic's Premier Partner Program helps select medical institutions adopt data-driven medicine to improve patient outcomes. In collaboration with Enlitic's deep learning experts, Premier Partners will lead the way in ushering in a new era of truly personalized patient care.


wanghaisheng commented 8 years ago

Deep Genomics uses deep learning networks to predict how both natural and therapeutic genetic variation changes cellular processes such as DNA-to-RNA transcription, gene splicing, and RNA polyadenylation. Applications include better understanding of diseases, disease mutations and genetic therapies. Spidex, their first product, provides "a comprehensive set of genetic variants and their predicted effects on human splicing across the entire genome."

wanghaisheng commented 8 years ago

In Radiology, Man Versus Machine

Alongside Enlitic, Merge Healthcare has partnered with IBM to introduce several AI tools. Work is underway to make them commercially available. For example, the company has developed an iPhone scanner that can diagnose mole malignancies with 90% accuracy, Tolle said.

In addition, this year, Merge plans to introduce a disease-specific audit service that offers more detailed – and searchable – information about cardiovascular disease, cancer, and chronic obstructive pulmonary disease. An EMR summarization tool is also in the works to help radiologists and cardiologists identify what information they might need from a patient's record to better understand their diagnostic images. Through the partnership, they also plan to introduce a smart MRI that can analyze entire images and pinpoint problems that need a radiologist's immediate attention.

wanghaisheng commented 8 years ago

Computers learning to find Australian cancers and broken bones that people miss

A deal signed today means a 'deep learning system' will soon help Australian radiologists to find cancers and breaks that are often missed, and to ignore lumps that don't matter. Then it will bring modern medical diagnostics to developing countries where radiologists are in short supply.

In a global first, Melbourne-headquartered radiology business Capitol Health announced this morning that it will implement the machine-learning system developed by Silicon Valley start-up Enlitic.

Founded by Melbourne serial entrepreneur Jeremy Howard, Enlitic has created computer learning systems that can take millions of scans, tests and medical records and learn from them to help doctors rapidly diagnose problems.

"This is the beginning of a transformation of global health services," says Jeremy.

Radiologists view hundreds of X-rays and other medical images every week looking for the unusual. Sometimes they're looking for something they've never actually seen before. Sometimes they're looking at something that's just four pixels in a two-million-pixel image.

"The new system will learn from a million scans held by Capitol.

"And it will keep learning from every ultrasound, CT, MRI, PET, and X-ray we perform," says Capitol Managing Director John Conidi.

"Within a year this system will be implemented across our clinics. Our radiologists will be able to work faster, provide more accurate results and save more lives. Many unnecessary, expensive and dangerous procedures will be avoided," he says.

"This system will transform Western healthcare," says Jeremy Howard. "The more data and computing time it gets, the more it learns and the more accurate it becomes. Eventually it will handle lab tests, patient histories, and genomic information. It will take much of the guess work out of medicine.

"In developing countries our impact will be even more profound. Most medical images are never seen by a doctor. Our system will enable a remote health worker to do an ultrasound scan and get a result in minutes."

How Enlitic works

You need a chest X-ray; it's doctor's orders. Is it pneumonia? Or maybe lung cancer?

What if your radiologist could draw on the collective wisdom of hundreds of other health professionals, thousands of patient case studies, and millions of medical images? And do so in a matter of minutes?

Data scientist, entrepreneur and Melbourne-boy-made-good Jeremy Howard developed Enlitic's 'deep learning' algorithm, connecting complex layers of medical and anatomical information, inspired by the function and interconnection of the human brain.

Enlitic is an example of machine learning, which brings together huge amounts of data and the ability of modern computing to crunch the numbers and make the connections. If you give the algorithm a stack of images—such as X-rays, CTs, MRIs and ultrasound scans—and the accompanying diagnoses, it learns the patterns. With enough base data, it can rapidly process and recognise a new medical image and predict the diagnosis. The more data you give it, the better it becomes: it literally learns.

Enlitic's technology is already used to help radiologists detect and diagnose the early signs of lung cancer and detect bone fractures, including in complex joints with multiple bones, such as the wrist. Enlitic's algorithm currently draws on an archive of about one million patients.

The evidence that Enlitic works

Lung cancer kills 80-90 per cent of all patients diagnosed at a late stage; it is one of the hardest cancers to detect in medical images. If caught early, survival is nearly 10 times more likely.

Enlitic adapted deep learning to automatically detect lung cancer nodules in chest CT images 50 per cent more accurately than an expert panel of thoracic radiologists, as found in a US trial looking at 1,000 people with cancer and 5,000 people without. The reduction of false negatives and the ability to detect early-stage nodules saves lives. The simultaneous reduction of false positives leads to fewer unnecessary and often costly biopsies, and less patient anxiety.

Enlitic benchmarked its performance against the publicly available, NIH-funded Lung Image Database Consortium data set, demonstrating its commitment to transparency.

Bone fractures – Enlitic has shown it is three times better at detecting extremity (e.g. wrist) bone fractures, which are very common yet extremely difficult for radiologists to reliably detect. Errors can lead to improper bone healing, resulting in a lifetime of alignment issues.

These fractures are often represented only by 4×4 pixels in a 4,000×4,000-pixel X-ray image, pushing the limits of computer vision technology.

In detection of fractures, Enlitic achieved 0.97 AUC (the most common measure of predictive modelling accuracy), more than three times better than the 0.85 AUC achieved by leading radiologists and many times better than the 0.71 AUC achieved by traditional computer vision approaches.

Enlitic was able to support analysis of thousands of image studies in a fraction of the time needed for a human to analyse a single study.

Why Enlitic is needed

Interpreting medical images can be incredibly challenging. Radiologists require years of training and there simply aren't enough of them with enough time to view the many medical images ordered by doctors.

Doctors can also make mistakes, more so later in the day due to tiredness. And less-experienced radiologists tend to make more mistakes. This can lead to unnecessary and invasive medical interventions that are expensive and distressing for patients.

Enlitic's capability won't replace radiologists; it will make it much faster for them to do their work.

Another benefit of the machine learning approach is that Enlitic is learning about what is normal and healthy alongside what is pathological. This information is transferable, providing the foundation for the application of the technology to other afflictions.

Medical imaging technologies are getting cheaper and more portable. But having a medical imaging machine is not all that is needed for an accurate diagnosis. Places like India, for example, have relatively few trained radiologists, so X-rays may never be looked at by a radiologist, but just reviewed by technicians or nurses. Even in developed countries, it may be nursing or other health staff who first view a diagnostic image, with hours or even days before the expert eye of a radiologist gets to view it. In time-critical health conditions, this can cost lives.

This is particularly crucial for the developing world. The World Economic Forum has estimated it will take hundreds of years to train enough experts to meet the professional healthcare needs of the developing world, including radiology.

All the data is there, it just isn't connected. This technology will change this, so that a scan ordered to detect one condition may in practice find something else. For example, a lung X-ray may be looking for pneumonia, but analysis through Enlitic may find a rib fracture.

What does the partnership mean?

This partnership will bring together Enlitic's capability and Capitol's financial investment, its network of nearly 100 radiology centres (collectively conducting ~150,000 X-rays, CTs and MRIs per year) and its experience in the form of their archive of about one million patients' diagnosis and imaging data.

"Every image improves the system," says Jeremy.

Radiology is the first step in Jeremy's 25-year plan. Ultimately, he wants to bring further data into Enlitic to work with other medical images, such as slides from pathology, images of the eye to support ophthalmology and optometry, patient notes and genomic information.

Consequently, this may become a tool that gives both a diagnosis and a prognosis, due to its ability to simultaneously incorporate and cross-reference a host of medical indicators.

wanghaisheng commented 8 years ago

On 2016-06-22, the medical imaging company Imagen Technologies, Inc. (New York, NY, 10021) raised $5,000,011 to develop automated diagnosis technology based on artificial intelligence, computer vision and machine learning and reduce diagnostic errors. Hundreds of millions of people worldwide are misdiagnosed each year, harming patients and driving unnecessary costs; Imagen wants to build a world without misdiagnosis. Imagen Technologies: https://www.imagentechnologies.com/ Each year hundreds of millions of people across the world are misdiagnosed. Diagnostic errors are a leading cause of patient harm and unnecessary costs, making it one of the greatest problems in healthcare.

Imagine a world where all patients are diagnosed instantly by leading experts. A world where all patients are treated correctly because they are diagnosed correctly. This is the world Imagen is building.

Imagen is putting leading medical expertise in the hands of healthcare providers everywhere. We are a team of doctors and technologists from world-class organizations working side-by-side to improve the accuracy and efficiency of the diagnostic process. We are beginning by applying the latest advances in computer vision and machine learning to musculoskeletal radiology.

Our medical team is comprised of leading radiologists and surgeons from Hospital for Special Surgery, the number one musculoskeletal provider in the U.S., as well as Mayo Clinic and other top-ranked hospitals. Our machine learning advisors are internationally-recognized thought leaders including: Dr. Richard Zemel (University of Toronto); Dr. Michael Mozer (University of Colorado, Boulder), and Dr. Serge Belongie (Cornell Tech).

wanghaisheng commented 8 years ago

Up to Speed on Deep Learning in Medical Imaging https://medium.com/the-mission/up-to-speed-on-deep-learning-in-medical-imaging-7ff1e91f6d71#.mgyhsyr91

wanghaisheng commented 8 years ago

Deep Learning for Medical Image Segmentation https://arxiv.org/pdf/1505.02000.pdf

wanghaisheng commented 8 years ago

http://cs.adelaide.edu.au/~dlmia/index.html http://www.miccai2016.org/en/ MICCAI 2016, the 19th International Conference on Medical Image Computing and Computer Assisted Intervention, will be held from October 17th to 21st, 2016 in Athens, Greece. MICCAI 2016 is organized in collaboration with Bogazici, Sabanci, and Istanbul Technical Universities.

wanghaisheng commented 8 years ago

http://www.deepcare.com/ DeepCare has raised a ¥6 million angel round from FreeS Capital (峰瑞资本), revolutionising disease screening and diagnosis with deep learning plus medical imaging http://36kr.com/p/5048181.html http://36kr.com/p/5048105.html Taking a different path in the rising medical imaging market, Yasen Technology (雅森科技) applies SPM for precise analysis

wanghaisheng commented 7 years ago

DeepMind partners with the NHS to use deep learning to spot early signs of eye disease: "Google DeepMind pairs with NHS to use machine learning to fight blindness" (The Guardian), by Alex Hern. https://www.theguardian.com/technology/2016/jul/05/google-deepmind-nhs-machine-learning-blindness

wanghaisheng commented 7 years ago

The Future of Artificial Intelligence, June 23, 2016
https://aifuture2016.stanford.edu/agenda

wanghaisheng commented 7 years ago

http://zongwei.leanote.com/post/Pa Image segmentation: could it be used to cut the human-tissue content out of a film, leaving only the text and other patient information? @littlePP24

wanghaisheng commented 7 years ago

Repository for tackling Kaggle Ultrasound Nerve Segmentation challenge using Torchnet. http://blog.qure.ai/notes/ultrasound-nerve-segmentation-using-torchnet
https://github.com/qureai/ultrasound-nerve-segmentation-using-torchnet

KangolHsu commented 7 years ago

Hiring?

wanghaisheng commented 7 years ago

@KangolHsu Interested in joining us? Feel free to email wanghaisheng@clearofchina.com

wanghaisheng commented 7 years ago

@KangolHsu Still haven't seen a résumé, though

KangolHsu commented 7 years ago

Sorry, I'm busy skilling up against your hiring requirements @wanghaisheng

wanghaisheng commented 7 years ago

@KangolHsu Come intern with us and learn on the job

wanghaisheng commented 7 years ago

Predicting lung cancer April 10, 2017 https://eliasvansteenkiste.github.io/machine%20learning/lung-cancer-pred/

wanghaisheng commented 7 years ago

Medicine is a serious business, and diagnosis is fairly complex, but it can roughly be divided into four steps: collect the patient's clinical information; organise and analyse that information; form a preliminary diagnosis or hypothesis from the analysis; then propose tests based on the collected information, using them to confirm the hypothesis and rule out unlikely ones, or in Zhu Ying's words, "use new techniques to confirm your own diagnosis and ideas." In the future, 50% of physicians will be replaced by AI, and this will happen on three levels:

First, 50% of the work of 50% of physicians will be replaced, including later-stage patient follow-up, speech recognition and image recognition.

Second, 50% of departments doing purely recognition and analysis work, such as radiology and pathology, may be replaced by AI.

Third, 50% of lower-level physicians who do not actively embrace technology may be replaced. "Those physicians will need AI to assist their diagnosis, while truly excellent physicians will treat AI as a good friend and assistant that improves their diagnostic efficiency."

http://weibo.com/ttarticle/p/show?id=2309614097972756419482&u=1998662112&m=4097963795538921&cu=1998662112

wanghaisheng commented 7 years ago

On 2017-04-24, the rapid cancer-detection company Biodesix, Inc. (Boulder, CO, 80301) closed a $13,165,482 round to develop blood-based early disease detection, letting physicians make more accurate diagnoses and better-informed treatment decisions based on a patient's unique molecular profile, leading to better treatment plans. Through serum-protein identification it also identifies patients with late-stage melanoma. #US funding news# (news bot @编形金刚)

JenifferWuUCLA commented 6 years ago

Hi, regarding the "intelligent diagnosis of pulmonary nodules" challenge: how should I predict, for the .mhd 3D lung images in the test set, the x, y, z coordinates of each nodule's centre and the probability that a suspected nodule is a true nodule? I used U-Net for image segmentation, but U-Net trains on already-sliced 2D images, so the predicted coordinates are 2D as well. Since submissions.csv requires 3D x, y, z nodule coordinates, how can this be done?
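One common way to bridge this gap (a general sketch, not the competition's reference solution) is to stack the per-slice 2D masks back into a volume and report the centroid of each 3D connected component as an (x, y, z) candidate; the nodule probability can then be taken from the network's scores inside each component:

```python
import numpy as np
from collections import deque

def nodule_centroids(slice_masks):
    """Stack per-slice 2D masks into a volume and return the (x, y, z)
    centroid of each 3D connected component (6-connectivity BFS)."""
    vol = np.stack(slice_masks)            # shape (z, y, x)
    seen = np.zeros(vol.shape, dtype=bool)
    centroids = []
    for start in zip(*np.nonzero(vol)):
        if seen[start]:
            continue
        # Breadth-first flood fill to collect one connected component.
        queue, voxels = deque([start]), []
        seen[start] = True
        while queue:
            z, y, x = queue.popleft()
            voxels.append((z, y, x))
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (z + dz, y + dy, x + dx)
                if (all(0 <= c < s for c, s in zip(n, vol.shape))
                        and vol[n] and not seen[n]):
                    seen[n] = True
                    queue.append(n)
        zs, ys, xs = np.array(voxels).T
        centroids.append((float(xs.mean()), float(ys.mean()), float(zs.mean())))
    return centroids

# Two fake "U-Net" slice masks with one nodule spanning both slices.
m0 = np.zeros((8, 8)); m0[2:4, 2:4] = 1
m1 = np.zeros((8, 8)); m1[2:4, 2:4] = 1
print(nodule_centroids([m0, m1]))  # [(2.5, 2.5, 0.5)]
```

The voxel coordinates would still need converting to world coordinates using the origin and spacing stored in the .mhd header before writing submissions.csv.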

wanghaisheng commented 6 years ago

@JenifferWuUCLA Really sorry, I haven't followed that Tianchi competition, so I don't know the details.

wanghaisheng commented 6 years ago

https://github.com/Kinpzz/Deep-Learning-on-Medical-Image/issues/8

wanghaisheng commented 6 years ago

https://github.com/vessemer/LungCancerDetection https://github.com/JenifferWuUCLA/pulmonary-nodules-deep-networks https://github.com/dereknewman/cancer_detection This is the source code for my part of the 2nd place solution to the National Data Science Bowl 2017 hosted by Kaggle.com. For documentation about the approach go to: http://juliandewit.github.io/kaggle-ndsb2017/

wanghaisheng commented 6 years ago

https://github.com/alegonz/kdsb17 https://github.com/lfz/DSB2017 https://github.com/lucashu1/CAIS-Data-Science-Bowl-Demo

This is an attempt at the classification task featured in the Kaggle Data Science Bowl 2017. The task consists of predicting from CT lung scans whether a patient will develop cancer within a year. This is a particularly challenging problem given the very high dimensionality of the data and the very limited number of samples.

The competition saw many creative approaches, such as those reported by the winning entries here (1st place), here (2nd place) and here (9th place). These approaches have in common that:

wanghaisheng commented 6 years ago

https://github.com/YiYuanIntelligent/3DFasterRCNN_LungNoduleDetector https://github.com/HPI-DeepLearning/LUCAD https://github.com/jeetmehta/Lung-Cancer-Classification https://github.com/njkaiser/EECS349_Project https://github.com/jmendozais/lung-nodule-detection