US9633306B2: Method and system for approximating deep neural networks for anatomical object detection

Publication number: US9633306B2 (application US14/706,108)
Authority: US (United States)
Prior art keywords: neural network, deep neural, trained deep, nodes, approximation
Legal status: Active, expires (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications

 G06N3/08—Learning methods
 G06N3/082—Learning methods modifying the architecture, e.g. adding or deleting nodes or connections, pruning
 G06N3/084—Backpropagation
 G06N3/04—Architectures, e.g. interconnection topology
 G06N3/0445—Feedback networks, e.g. Hopfield nets, associative networks
 G06N3/0454—Architectures using a combination of multiple neural nets
 G06K9/00362—Recognising human body or animal bodies, e.g. vehicle occupant, pedestrian; recognising body parts, e.g. hand
 G06K9/4609—Detecting partial patterns, e.g. edges or contours, by matching or filtering
 G06K9/4619—Biologically-inspired filters, e.g. receptive fields
 G06K9/4623—Biologically-inspired filters with interaction between the responses of different filters
 G06K9/4628—Integrating the filters into a hierarchical structure
 G06K9/6256—Obtaining sets of training patterns; bootstrap methods, e.g. bagging, boosting
 G06K9/6284—Single-class perspective, e.g. one-against-all classification; novelty detection; outlier detection
 G06T7/0012—Biomedical image inspection
 G06K2209/05—Recognition of patterns in medical or anatomical images
 G06T2207/20084—Artificial neural networks [ANN]
Abstract
Description
W(m,n) = Σ_{k} Σ_{l} Φ_{k,l}(m,n) Y(k,l)   (1)
which reconstructs the neural network weights from the wavelet coefficients Y. This can be expressed in terms of 1-D wavelet bases as:
W(m,n) = Σ_{k} Σ_{l} Φ_{k}(m) Y(k,l) Φ_{l}(n).   (2)
In an advantageous implementation, Haar wavelet bases are used. For example, 4×4 Haar wavelet bases can be expressed as:
It is to be understood that the present invention is not limited to the wavelet bases shown in Equation (3), and other sizes of the wavelet bases can be used as well.
Accordingly, the Frobenius inner product P:W=Σ_{m}Σ_{n}P(m,n)W(m,n) is approximated as the inner product of Y and Φ^{T}PΦ.
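The separable reconstruction and the inner-product identity above can be checked numerically. The following sketch (array sizes and the random data are illustrative, not from the patent) builds a standard 4×4 orthonormal Haar basis and verifies that P:W equals the inner product of Y and Φ^{T}PΦ when W = Φ Y Φ^{T}:

```python
import numpy as np

rng = np.random.default_rng(0)

# Columns of phi are 1-D Haar basis vectors (a standard 4x4 orthonormal Haar matrix).
phi = np.array([
    [1.0,  1.0,  1.0,  1.0],
    [1.0,  1.0, -1.0, -1.0],
    [np.sqrt(2), -np.sqrt(2), 0.0, 0.0],
    [0.0, 0.0, np.sqrt(2), -np.sqrt(2)],
]).T / 2.0

Y = rng.standard_normal((4, 4))      # wavelet coefficients
W = phi @ Y @ phi.T                  # Eq. (2): W(m,n) = sum_k sum_l Phi_k(m) Y(k,l) Phi_l(n)

P = rng.standard_normal((4, 4))      # an input patch
lhs = np.sum(P * W)                  # Frobenius inner product P : W
rhs = np.sum((phi.T @ P @ phi) * Y)  # inner product of Y and Phi^T P Phi
assert np.isclose(lhs, rhs)
```

The identity holds for any basis matrix Φ; the orthonormal Haar choice additionally makes Φ^{T}PΦ cheap to compute via fast wavelet transforms.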
P:W_{i} = α_{i,1} U_{1}:P + … + α_{i,K} U_{K}:P   (5)
If K is smaller than the number of hidden layer units H, the K values U_{1}:P, …, U_{K}:P can be computed much faster than P:W_{1}, …, P:W_{H}, thereby achieving a speed-up in computing P:W over all hidden layer units. The PCA approximation can be combined with the Haar wavelet analysis by applying PCA to the space of the Haar wavelet coefficients {Y}, yielding an additional speed-up in computing Y:Φ^{T}PΦ over all hidden layer units of a hidden layer.
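As a sketch of the PCA step (dimensions and the random filters are illustrative), a rank-K basis of the stacked filters can be obtained with an SVD; K inner products with the basis vectors replace H inner products with the filters:

```python
import numpy as np

rng = np.random.default_rng(1)
H, d, K = 100, 64, 10                # hidden units, flattened patch size, PCA rank

W = rng.standard_normal((H, d))      # row i holds the flattened filter W_i
U_, s, Vt = np.linalg.svd(W, full_matrices=False)
basis = Vt[:K]                       # principal directions U_1 ... U_K (flattened)
alpha = W @ basis.T                  # expansion coefficients alpha_{i,k}

P = rng.standard_normal(d)           # flattened input patch
exact = W @ P                        # P : W_i for all H hidden units (H inner products)
fast = alpha @ (basis @ P)           # K inner products, then a cheap H-by-K combination
# For redundant filter banks the error at small K is small; random filters as used
# here have no redundancy, so fast is only a low-rank approximation of exact.
```

With K equal to the full rank the reconstruction is exact; smaller K trades accuracy for fewer inner products.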
h^{(l)} = ƒ(W^{(l)} x + b^{(l)})   (6)
where ƒ is a nonlinear rectification function, such as the sigmoid function. The training of a deep neural network, such as a stacked denoising autoencoder, can be performed by stochastic gradient descent on a cost function measuring the Euclidean distance between the predicted outcomes and the observations in the training data. Ideally, each node in the network should extract a different piece of information from the input image data, so that the combination of nodes yields an accurate and robust prediction for the landmark location. However, there is no explicit constraint to prevent different nodes from learning the same thing. Moreover, due to the highly complex and non-convex nature of the optimization procedure used to train the deep neural network, the trained deep neural network will likely contain significant redundancy.
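For reference, the layer mapping of Equation (6) with a sigmoid nonlinearity is only a few lines; the sizes below are arbitrary:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def layer_forward(W, b, x):
    """Eq. (6): h = f(W x + b) with a sigmoid rectification function f."""
    return sigmoid(W @ x + b)

rng = np.random.default_rng(2)
W = rng.standard_normal((32, 64))    # weight matrix of one hidden layer
b = np.zeros(32)                     # bias vector
x = rng.standard_normal(64)          # input (e.g., a flattened image patch)
h = layer_forward(W, b, x)
```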
h_{i}^{(l)}(x) = ƒ(Σ_{j∈S_{i}^{l}} W_{ij}^{(l)} x_{j} + b_{i}^{(l)})   (7)
where S_{i}^{l} is the indexed set of retained connections of the i-th filter at layer l. The smaller the set S_{i}^{l}, the greater the achievable speed-up, at the cost of a stronger perturbation introduced to the network. In an advantageous implementation, once the reduced set of coefficients is determined for each filter, the deep neural network can be refined using supervised backpropagation to alleviate the effects of the perturbation. In this refinement step, only the active coefficients are updated, while the coefficients set to zero remain at zero.
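One simple way to realize such a sparse approximation is sketched below; the magnitude-based selection rule is an assumption (the passage only requires that a reduced index set S_{i}^{l} be chosen per filter). Each filter keeps its largest-magnitude coefficients, and the gradient is masked during refinement so that zeroed weights stay zero:

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.standard_normal((32, 64))           # 32 filters, 64 coefficients each

# Retain the top 10% of coefficients (by magnitude) in every filter row.
keep = max(1, int(0.1 * W.shape[1]))
order = np.argsort(-np.abs(W), axis=1)[:, :keep]   # index sets S_i
mask = np.zeros_like(W, dtype=bool)
mask[np.arange(W.shape[0])[:, None], order] = True
W_sparse = np.where(mask, W, 0.0)

# Refinement: only active coefficients receive gradient updates.
grad = rng.standard_normal(W.shape)         # stand-in for a backprop gradient
W_sparse -= 0.01 * np.where(mask, grad, 0.0)
```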
TABLE 1
Network                          Error (%)
Original (non-sparse)            3.09
20% non-zero (5× speed-up)       3.09
10% non-zero (10× speed-up)      3.09
5% non-zero (20× speed-up)       4.07
1% non-zero (100× speed-up)      11.05
As shown in Table 1, it is possible to eliminate a significant portion (e.g., 95%) of each filter's coefficients (weights) without losing much in accuracy.
Σ_{l} ∥Γ^{(l)} ★ W^{(l)}∥_{1}   (8)
where ★ denotes the elementwise multiplication and Γ is a matrix whose coefficients are computed as:
This reweighting scheme reduces the effect of coefficient magnitude on the L1-norm term of the objective function by multiplying each coefficient in the L1-norm by a term approximating the inverse of its magnitude. The reweighting makes the regularization behave more like L0-norm regularization, and drives a large number of weights that are less relevant to the final classification result to zero. Once the reweighted L1-norm minimization has been performed using backpropagation, backpropagation can be performed again using stochastic gradient descent (i.e., with the original cost function) to refine the remaining non-zero coefficients.
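The effect can be sketched with a proximal (soft-threshold) step; the reweighting formula Γ = 1/(|W| + ε) is an assumed common choice, since Equation (9) is not reproduced above, and the data-fitting gradient is omitted. Small weights see a large threshold and are driven exactly to zero, while large weights are barely shrunk:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the (weighted) L1-norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(4)
W = rng.standard_normal((32, 64))
eps, lam, lr = 1e-3, 1e-2, 0.1

for _ in range(20):
    gamma = 1.0 / (np.abs(W) + eps)        # assumed reweighting term (Eq. (9) not shown)
    W = soft_threshold(W, lr * lam * gamma)

sparsity = np.mean(W == 0.0)               # a sizeable fraction of weights is now exactly zero
```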
a^{(l)}(x) = M^{(l)} a_{S}^{(l)}(x), ∀x.   (10)
The left-hand side a^{(l)}(x) is the output at a specific layer l. On the right-hand side, an equivalent output is obtained by linearly combining a small set of functions (nodes) a_{S}^{(l)}(x), indicated by the index set S, with a mixing matrix M. In practice, it is sufficient to satisfy this condition for a finite set of training data samples x. If the condition is met for the finite set of training samples, then the subset of functions A_{S}^{l} can be used to reproduce the outputs of all functions A^{l} in the original trained layer for any input, such that:
A^{l} = M^{l} A_{S}^{l}.   (11)
The above condition may not be perfectly satisfied due to noise and artifacts that are often present in the image data. However, a subset of functions (nodes) that approximately satisfies the condition can be identified. In an advantageous implementation, the subset can be identified by solving the following optimization problem:
where the column-sparse constraint, expressed as the quasi-norm ∥M∥_{col-0}, enforces the selection of a small subset of functions that linearly approximates all output functions. The union of the indices of the non-zero columns in the matrix M identified by solving the optimization problem equals the set S that we are trying to identify. Greedy algorithms, such as simultaneous orthogonal matching pursuit, can be used to optimize the above cost function, and thus to identify the subset of nodes that can represent the entire set of nodes of a particular layer, as well as to compute the mixing matrix M used with the subset of nodes to approximate all of that layer's output functions.
W^{l+1} ← W^{l+1} M^{l}_{column∈S^{l}}
W^{l} ← W^{l}_{row∈S^{l}}   (13)
The matrix M expresses the linear dependence of each output function (node) on the selected subset of functions (nodes). The matrix M^{l}_{column∈S^{l}} is formed by the columns of M^{l} whose indices are in S^{l}, and W^{l}_{row∈S^{l}} is formed by the rows of W^{l} whose indices are in S^{l}.
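A toy version of the selection step is sketched below. The greedy loop is a simplified stand-in for simultaneous orthogonal matching pursuit, and all sizes and the synthetic low-rank layer outputs are illustrative. It selects K nodes, solves for the mixing matrix M by least squares, and folds M into the next layer's weights in the spirit of Equation (13):

```python
import numpy as np

rng = np.random.default_rng(5)
n, H, H_next, K = 200, 50, 30, 10
# Synthetic redundant layer: node outputs over n samples lie near a rank-5 subspace.
A = rng.standard_normal((n, 5)) @ rng.standard_normal((5, H))
A += 0.01 * rng.standard_normal((n, H))

S, residual = [], A.copy()
for _ in range(K):
    # Score each candidate node by how much of all residual outputs it explains.
    scores = np.linalg.norm(residual.T @ A, axis=0) / np.linalg.norm(A, axis=0)
    if S:
        scores[S] = -np.inf                           # never reselect a node
    S.append(int(np.argmax(scores)))
    M, *_ = np.linalg.lstsq(A[:, S], A, rcond=None)   # mixing matrix, Eq. (11)
    residual = A - A[:, S] @ M

# Eq. (13): absorb M into the next layer and keep only the selected filters.
W_l = rng.standard_normal((H, 64))       # this layer's weights (H filters)
W_next = rng.standard_normal((H_next, H))
W_next_s = W_next @ M.T                  # mixing matrix folded into layer l+1
W_l_s = W_l[S, :]                        # rows of W^l indexed by S
```

Because the synthetic outputs are nearly rank 5, ten selected nodes reproduce all fifty with small residual; the compressed pair (W_l_s, W_next_s) then computes approximately the same layer-to-layer mapping.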
TABLE 2
SdA Network Size                      Error (%)
1024-1024-300-100-2 (original)        3.09
1024-340-100-35-2 (simplified)        3.27
1024-200-50-20-2 (simplified)         4.29
1024-340-100-35-2 (from scratch)      3.82
1024-200-50-20-2 (from scratch)       7.16
As shown in Table 2, the degradation in classification accuracy between the simplified networks and the original trained deep neural network is quite small. It can also be observed in Table 2 that smaller networks of the same sizes trained from scratch (i.e., unsupervised pre-training followed by supervised refinement) perform worse than the simplified networks.
Claims (33)
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title 

US14/706,108 US9633306B2 (en)  20150507  20150507  Method and system for approximating deep neural networks for anatomical object detection 
Applications Claiming Priority (4)
Application Number  Priority Date  Filing Date  Title 

US14/706,108 US9633306B2 (en)  20150507  20150507  Method and system for approximating deep neural networks for anatomical object detection 
EP16167707.5A EP3091486A3 (en)  20150507  20160429  Method and system for approximating deep neural networks for anatomical object detection 
CN201610296717.4A CN106127217B (en)  20150507  20160506  Method and system for approximating deep neural networks for anatomical object detection 
CN201910342258.2A CN110175630A (en)  20150507  20160506  Method and system for approximating deep neural networks for anatomical object detection 
Publications (2)
Publication Number  Publication Date 

US20160328643A1 US20160328643A1 (en)  20161110 
US9633306B2 true US9633306B2 (en)  20170425 
Family
ID=55910782
Family Applications (1)
Application Number  Title  Priority Date  Filing Date 

US14/706,108 Active 20350616 US9633306B2 (en)  20150507  20150507  Method and system for approximating deep neural networks for anatomical object detection 
Country Status (3)
Country  Link 

US (1)  US9633306B2 (en) 
EP (1)  EP3091486A3 (en) 
CN (2)  CN110175630A (en) 
Cited By (6)
Publication number  Priority date  Publication date  Assignee  Title 

US20170213321A1 (en) *  20160122  20170727  Siemens Healthcare Gmbh  Deep Unfolding Algorithm For Efficient Image Denoising Under Varying Noise Conditions 
WO2018207146A1 (en) *  20170512  20181115  Tenstorrent Inc.  Processing core operation suppression based on contribution estimate 
US10565686B2 (en)  20170612  20200218  Nvidia Corporation  Systems and methods for training neural networks for regression without ground truth training samples 
US10657446B2 (en)  20170602  20200519  Mitsubishi Electric Research Laboratories, Inc.  Sparsity enforcing neural network 
US10885659B2 (en)  20180115  20210105  Samsung Electronics Co., Ltd.  Object pose estimating method and apparatus 
US10902302B2 (en)  20180423  20210126  International Business Machines Corporation  Stacked neural network framework in the internet of things 
Families Citing this family (56)
Publication number  Priority date  Publication date  Assignee  Title 

CN106170246A (en)  20140117  20161130  阿特瑞斯公司  Apparatus, methods and articles for four-dimensional (4D) flow magnetic resonance imaging 
CN107438866B (en) *  20150513  20201201  谷歌公司  Depth stereo: learning to predict new views from real world imagery 
US10410096B2 (en) *  20150709  20190910  Qualcomm Incorporated  Contextbased priors for object detection in images 
US10529318B2 (en) *  20150731  20200107  International Business Machines Corporation  Implementing a classification model for recognition processing 
US9769367B2 (en)  20150807  20170919  Google Inc.  Speech and computer visionbased control 
US10467528B2 (en) *  20150811  20191105  Oracle International Corporation  Accelerated TRLBFGS algorithm for neural network 
US10290040B1 (en) *  20150916  20190514  Amazon Technologies, Inc.  Discovering crosscategory latent features 
CN108603922A (en)  20151129  20180928  阿特瑞斯公司  Automatic cardiac volume is divided 
US9836819B1 (en)  20151230  20171205  Google Llc  Systems and methods for selective retention and editing of images captured by mobile image capture device 
US9838641B1 (en)  20151230  20171205  Google Llc  Low power framework for processing, compressing, and transmitting images at a mobile image capture device 
US10732809B2 (en)  20151230  20200804  Google Llc  Systems and methods for selective retention and editing of images captured by mobile image capture device 
US10225511B1 (en)  20151230  20190305  Google Llc  Low power framework for controlling image sensor mode in a mobile image capture device 
US9836484B1 (en) *  20151230  20171205  Google Llc  Systems and methods that leverage deep learning to selectively store images at a mobile image capture device 
US9760807B2 (en)  20160108  20170912  Siemens Healthcare Gmbh  Deep imagetoimage network learning for medical image analysis 
US20170221204A1 (en) *  20160128  20170803  Siemens Medical Solutions Usa, Inc.  Overlay Of Findings On Image Data 
US10867142B2 (en) *  20160629  20201215  Intel Corporation  Multiplicationfree approximation for neural networks and sparse coding 
MX2018015394A (en)  20160708  20190422  Avent Inc  System and method for automatic detection, localization, and semantic segmentation of anatomical objects. 
US10140979B2 (en) *  20160810  20181127  Conduent Business Services, Llc  Modeling a class posterior probability of context dependent phonemes in a speech recognition system 
US10832123B2 (en) *  20160812  20201110  Xilinx Technology Beijing Limited  Compression of deep neural networks with proper use of mask 
US20180060728A1 (en) *  20160831  20180301  Microsoft Technology Licensing, Llc  Deep Embedding Forest: Forestbased Serving with Deep Embedding Features 
WO2018084577A1 (en) *  20161103  20180511  Samsung Electronics Co., Ltd.  Data recognition model construction apparatus and method for constructing data recognition model thereof, and data recognition apparatus and method for recognizing data thereof 
AU2017268489B1 (en) *  20161202  20180517  Avent, Inc.  System and method for navigation to a target anatomical object in medical imagingbased procedures 
WO2018101985A1 (en) *  20161202  20180607  Avent, Inc.  System and method for navigation to a target anatomical object in medical imagingbased procedures 
US10985777B2 (en)  20161209  20210420  William Marsh Rice University  Signal recovery via deep convolutional networks 
EP3561736A4 (en) *  20161220  20200909  Shanghai Cambricon Information Technology Co., Ltd  Multiplication and addition device for matrices, neural network computing device, and method 
EP3570222A4 (en) *  20170112  20200205  KDDI Corporation  Information processing device and method, and computer readable storage medium 
WO2018140596A2 (en) *  20170127  20180802  Arterys Inc.  Automated segmentation utilizing fully convolutional networks 
US20180225822A1 (en) *  20170208  20180809  Siemens Healthcare Gmbh  Hierarchical Learning of Weights of a Neural Network for Performing Multiple Analyses 
US10636141B2 (en)  20170209  20200428  Siemens Healthcare Gmbh  Adversarial and dual inverse deep learning networks for medical image analysis 
US10713785B2 (en) *  20170213  20200714  Siemens Healthcare Gmbh  Image quality assessment system and method 
US10546242B2 (en)  20170303  20200128  General Electric Company  Image analysis neural network systems 
US10133964B2 (en)  20170328  20181120  Siemens Healthcare Gmbh  Magnetic resonance image reconstruction system and method 
US10127495B1 (en) *  20170414  20181113  Rohan Bopardikar  Reducing the size of a neural network through reduction of the weight matrices 
EP3631690A4 (en) *  20170523  20210331  Intel Corporation  Methods and apparatus for enhancing a neural network using binary tensor and scale factor pairs 
CN110574044A (en) *  20170523  20191213  英特尔公司  Method and apparatus for enhancing binary weighted neural networks using dependency trees 
US20210150779A1 (en) *  20170619  20210520  Washington University  Deep learningassisted image reconstruction for tomographic imaging 
WO2019090325A1 (en)  20171106  20190509  Neuralmagic, Inc.  Methods and systems for improved transforms in convolutional neural networks 
WO2019097749A1 (en) *  20171116  20190523  Mitsubishi Electric Corporation  Computerbased system and computerbased method 
US20200408929A1 (en) *  20171208  20201231  Rensselaer Polytechnic Institute  A neural networkbased corrector for photon counting detectors 
US10482600B2 (en)  20180116  20191119  Siemens Healthcare Gmbh  Crossdomain image analysis and crossdomain image synthesis using deep imagetoimage networks and adversarial networks 
WO2019152308A1 (en) *  20180130  20190808  D5Ai Llc  Selforganizing partially ordered networks 
US10832133B2 (en)  20180531  20201110  Neuralmagic Inc.  System and method of executing neural networks 
US10963787B2 (en)  20180531  20210330  Neuralmagic Inc.  Systems and methods for generation of sparse code for convolutional neural networks 
CN108846344A (en) *  20180605  20181120  中南大学  A kind of pedestrian's posture multiple features INTELLIGENT IDENTIFICATION method merging deep learning 
US10776923B2 (en) *  20180621  20200915  International Business Machines Corporation  Segmenting irregular shapes in images using deep region growing 
US10643092B2 (en)  20180621  20200505  International Business Machines Corporation  Segmenting irregular shapes in images using deep region growing with an image pyramid 
US10671891B2 (en) *  20180719  20200602  International Business Machines Corporation  Reducing computational costs of deep reinforcement learning by gated convolutional neural network 
KR101952887B1 (en) *  20180727  20190611  김예현  Method for predicting anatomical landmarks and device for predicting anatomical landmarks using the same 
CN109344958A (en) *  20180816  20190215  北京师范大学  Object identification method and identifying system based on feedback regulation 
CN109036412A (en) *  20180917  20181218  苏州奇梦者网络科技有限公司  voice awakening method and system 
US20200104717A1 (en) *  20181001  20200402  Neuralmagic Inc.  Systems and methods for neural network pruning with accuracy preservation 
US20200258216A1 (en) *  20190213  20200813  Siemens Healthcare Gmbh  Continuous learning for automatic view planning for image acquisition 
SG11202104263QA (en) *  20190719  20210528  Shenzhen Sensetime Technology Co Ltd  Batch normalization data processing method and apparatus, electronic device, and storage medium 
US20210064985A1 (en) *  20190903  20210304  International Business Machines Corporation  Machine learning hardware having reduced precision parameter components for efficient parameter update 
CN110613445A (en) *  20190925  20191227  西安邮电大学  DWNN frameworkbased electrocardiosignal identification method 
CN110633714A (en) *  20190925  20191231  山东师范大学  VGG image feature extraction acceleration method and system based on approximate calculation 
Citations (13)
Publication number  Priority date  Publication date  Assignee  Title 

US20050169529A1 (en) *  20040203  20050804  Yuri Owechko  Active learning system for object fingerprinting 
US7783459B2 (en) *  20070221  20100824  William Marsh Rice University  Analog system for computing sparse codes 
US20110218950A1 (en) *  20080602  20110908  New York University  Method, system, and computeraccessible medium for classification of at least one ictal state 
US20130023715A1 (en) *  20110121  20130124  Headwater Partners Ii Llc  Tracking of tumor location for targeted radiation treatment 
US20130138436A1 (en)  20111126  20130530  Microsoft Corporation  Discriminative pretraining of deep neural networks 
US20130138589A1 (en) *  20111128  20130530  Microsoft Corporation  Exploiting sparseness in training deep neural networks 
US20130177235A1 (en) *  20120105  20130711  Philip Meier  Evaluation of ThreeDimensional Scenes Using TwoDimensional Representations 
US20150112182A1 (en)  20131017  20150423  Siemens Aktiengesellschaft  Method and System for Machine Learning Based Assessment of Fractional Flow Reserve 
US20150125049A1 (en)  20131104  20150507  Facebook, Inc.  Systems and methods for facial representation 
US20150161988A1 (en)  20131206  20150611  International Business Machines Corporation  Systems and methods for combining stochastic average gradient and hessianfree optimization for sequence training of deep neural networks 
US20150161987A1 (en)  20131206  20150611  International Business Machines Corporation  Systems and methods for accelerating hessianfree optimization for deep neural networks by implicit preconditioning and sampling 
US20150170002A1 (en)  20130531  20150618  Google Inc.  Object detection using deep neural networks 
US20160180200A1 (en) *  20141219  20160623  Google Inc.  Largescale classification in neural networks using hashing 
Family Cites Families (2)
Publication number  Priority date  Publication date  Assignee  Title 

CN104134062A (en) *  20140818  20141105  朱毅  Vein recognition system based on depth neural network 
CN104361328B (en) *  20141121  20181102  重庆中科云丛科技有限公司  A kind of facial image normalization method based on adaptive multiple row depth model 

2015
 20150507 US US14/706,108 patent/US9633306B2/en active Active

2016
 20160429 EP EP16167707.5A patent/EP3091486A3/en active Pending
 20160506 CN CN201910342258.2A patent/CN110175630A/en active Pending
 20160506 CN CN201610296717.4A patent/CN106127217B/en active Active
Patent Citations (14)
Publication number  Priority date  Publication date  Assignee  Title 

US20050169529A1 (en) *  20040203  20050804  Yuri Owechko  Active learning system for object fingerprinting 
US7783459B2 (en) *  20070221  20100824  William Marsh Rice University  Analog system for computing sparse codes 
US20110218950A1 (en) *  20080602  20110908  New York University  Method, system, and computeraccessible medium for classification of at least one ictal state 
US20130023715A1 (en) *  20110121  20130124  Headwater Partners Ii Llc  Tracking of tumor location for targeted radiation treatment 
US20130138436A1 (en)  20111126  20130530  Microsoft Corporation  Discriminative pretraining of deep neural networks 
US20130138589A1 (en) *  20111128  20130530  Microsoft Corporation  Exploiting sparseness in training deep neural networks 
US8700552B2 (en)  20111128  20140415  Microsoft Corporation  Exploiting sparseness in training deep neural networks 
US20130177235A1 (en) *  20120105  20130711  Philip Meier  Evaluation of ThreeDimensional Scenes Using TwoDimensional Representations 
US20150170002A1 (en)  20130531  20150618  Google Inc.  Object detection using deep neural networks 
US20150112182A1 (en)  20131017  20150423  Siemens Aktiengesellschaft  Method and System for Machine Learning Based Assessment of Fractional Flow Reserve 
US20150125049A1 (en)  20131104  20150507  Facebook, Inc.  Systems and methods for facial representation 
US20150161988A1 (en)  20131206  20150611  International Business Machines Corporation  Systems and methods for combining stochastic average gradient and hessianfree optimization for sequence training of deep neural networks 
US20150161987A1 (en)  20131206  20150611  International Business Machines Corporation  Systems and methods for accelerating hessianfree optimization for deep neural networks by implicit preconditioning and sampling 
US20160180200A1 (en) *  20141219  20160623  Google Inc.  Largescale classification in neural networks using hashing 
NonPatent Citations (7)
Title 

Cheng Y. et al.: "Fast neural networks with circulant projections", arXiv:1502.03436v1 [cs.CV], XP055210210, Retrieved from the Internet: URL:http://arxiv.org/abs; Feb. 11, 2015. 
De Brebisson A. et al.: "Deep Neural Networks for Anatomical Brain Segmentation", XP55337395, Retrieved from the Internet: URL:https://arxiv.org/pdf/1502.02445v1.pdf; Feb. 9, 2015. 
Denton E. et al.: "Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation", pp. 1-16, Retrieved from the Internet; Apr. 2, 2014. 
Gong Y. et al.: "Compressing Deep Convolutional Networks using Vector Quantization", pp. 1-10, XP055262159, Retrieved from the Internet; Dec. 18, 2014. 
He Tianxing et al.: "Reshaping deep neural network for fast decoding by node-pruning"; 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, pp. 245-249; May 4, 2014. 
Xu et al.: "Deep Learning of Feature Representation With Multiple Instance Learning for Medical Image Analysis," 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5 pages. * 
Zhang et al.: "Efficient and Accurate Approximations of Nonlinear Convolutional Networks"; XP055337086, Retrieved from the Internet: URL:https://arxiv.org/pdf/1411.4229v1.pdf; Nov. 16, 2014. 
Cited By (9)
Publication number  Priority date  Publication date  Assignee  Title 

US20170213321A1 (en) *  20160122  20170727  Siemens Healthcare Gmbh  Deep Unfolding Algorithm For Efficient Image Denoising Under Varying Noise Conditions 
US10043243B2 (en) *  20160122  20180807  Siemens Healthcare Gmbh  Deep unfolding algorithm for efficient image denoising under varying noise conditions 
WO2018207146A1 (en) *  20170512  20181115  Tenstorrent Inc.  Processing core operation suppression based on contribution estimate 
US10318317B2 (en)  20170512  20190611  Tenstorrent Inc.  Processing core with operation suppression based on contribution estimate 
US10585679B2 (en)  20170512  20200310  Tenstorrent Inc.  Processing core with operation suppression based on contribution estimate 
US10657446B2 (en)  20170602  20200519  Mitsubishi Electric Research Laboratories, Inc.  Sparsity enforcing neural network 
US10565686B2 (en)  20170612  20200218  Nvidia Corporation  Systems and methods for training neural networks for regression without ground truth training samples 
US10885659B2 (en)  20180115  20210105  Samsung Electronics Co., Ltd.  Object pose estimating method and apparatus 
US10902302B2 (en)  20180423  20210126  International Business Machines Corporation  Stacked neural network framework in the internet of things 
Also Published As
Publication number  Publication date 

EP3091486A3 (en)  20170322 
CN106127217B (en)  20191105 
US20160328643A1 (en)  20161110 
CN110175630A (en)  20190827 
EP3091486A2 (en)  20161109 
CN106127217A (en)  20161116 
Similar Documents
Publication  Publication Date  Title 

US9633306B2 (en)  Method and system for approximating deep neural networks for anatomical object detection  
US9730643B2 (en)  Method and system for anatomical object detection using marginal space deep neural networks  
US9668699B2 (en)  Method and system for anatomical object detection using marginal space deep neural networks  
Xue et al.  Segan: Adversarial network with multiscale l 1 loss for medical image segmentation  
Andermatt et al.  Multidimensional gated recurrent units for the segmentation of biomedical 3Ddata  
Xie et al.  Learning descriptor networks for 3d shape synthesis and analysis  
Sironi et al.  Learning separable filters  
Tang et al.  Deep networks for robust visual recognition  
Krebs et al.  Unsupervised probabilistic deformation modeling for robust diffeomorphic registration  
US10467495B2 (en)  Method and system for landmark detection in medical images using deep neural networks  
EP3093821A1 (en)  Method and system for anatomical object pose detection using marginal space deep neural networks  
CN110335261B (en)  CT lymph node detection system based on spacetime circulation attention mechanism  
Ye et al.  Deep residual learning for modelbased iterative ct reconstruction using plugandplay framework  
Chang et al.  Brain MR image restoration using an automatic trilateral filter with GPUbased acceleration  
Wu et al.  Endtoend abnormality detection in medical imaging  
Sureau et al.  Deep learning for a spacevariant deconvolution in galaxy surveys  
Nath et al.  Diminishing Uncertainty within the Training Pool: Active Learning for Medical Image Segmentation  
Ma et al.  Learning image from projection: a fullautomatic reconstruction (far) net for sparseviews computed tomography  
Leong et al.  Multiple 3D farfield/nearfield moving target localization using wideband echo chirp signals  
Jarosik et al.  The feasibility of deep learning algorithms integration on a GPUbased ultrasound research scanner  
Avetisian  Volumetric Medical Image Segmentation with Deep Convolutional Neural Networks.  
Lohne  Parseval Reconstruction Networks  
CN113256592B (en)  Training method, system and device of image feature extraction model  
Johansen et al.  Medical image segmentation: A general unet architecture and novel capsule network approaches  
Li et al.  Deep Algorithm Unrolling for Biomedical Imaging 
Legal Events
Date  Code  Title  Description 

AS  Assignment 
Owner name: SIEMENS CORPORATION, NEW JERSEY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:COMANICIU, DORIN;GEORGESCU, BOGDAN;LAY, NATHAN;AND OTHERS;SIGNING DATES FROM 20160429 TO 20160729;REEL/FRAME:040216/0726 

AS  Assignment 
Owner name: SIEMENS AKTIENGESELLSCHAFT, GERMANY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SIEMENS CORPORATION;REEL/FRAME:040224/0443 Effective date: 20161031 Owner name: SIEMENS HEALTHCARE GMBH, GERMANY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SIEMENS AKTIENGESELLSCHAFT;REEL/FRAME:040224/0627 Effective date: 20161104 

AS  Assignment 
Owner name: SIEMENS CORPORATION, NEW JERSEY Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE OMISSION OF ASSIGNOR NAME PREVIOUSLY RECORDED AT REEL: 040216 FRAME: 0726. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNORS:COMANICIU, DORIN;GEORGESCU, BOGDAN;KRETSCHMER, JAN;AND OTHERS;SIGNING DATES FROM 20160429 TO 20160801;REEL/FRAME:040569/0876 

STCF  Information on status: patent grant 
Free format text: PATENTED CASE 

MAFP  Maintenance fee payment 
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 