Ticket #840 (closed wish: wontfix)

Opened 3 years ago

Last modified 3 years ago

Implement GPU accelerated learning algorithms

Reported by: gw0 Owned by: janez
Milestone: Future Component: library
Severity: minor Keywords: gpu, opencl, cuda
Cc: Blocking:
Blocked By:


Graphics cards are becoming increasingly powerful and can nowadays execute almost any parallelizable workload extremely fast, comparable to a small supercomputer. It would therefore be nice if Orange caught up with the current trend of GPU computing and consequently became much faster.

As far as I can see, there are currently two technologies used for GPU computing: AMD/ATI supports only OpenCL, while Nvidia is more optimized for CUDA. There are Python libraries for both of them (PyOpenCL and PyCUDA).
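As an illustration only (a minimal sketch, not part of the ticket): both PyOpenCL and PyCUDA express work as small data-parallel kernels launched across thousands of GPU threads. The canonical example is SAXPY (y = a*x + y); the sketch below shows the same operation in plain NumPy so it runs without any GPU hardware, with the equivalent OpenCL C kernel given in a comment for comparison.

```python
# Hedged sketch of the data-parallel "kernel" pattern that PyOpenCL/PyCUDA
# expose. The GPU version would compile a small C kernel and launch it across
# thousands of threads; here the same SAXPY operation (y = a*x + y) is shown
# with NumPy as a CPU stand-in.
import numpy as np

def saxpy(a, x, y):
    """CPU reference for the canonical GPU kernel y[i] = a * x[i] + y[i]."""
    # On a GPU, each element-wise operation would be handled by its own
    # thread; NumPy at least vectorizes them on the CPU.
    return a * x + y

# The equivalent OpenCL C kernel (what PyOpenCL would compile) looks like:
#   __kernel void saxpy(float a, __global const float *x, __global float *y) {
#       int i = get_global_id(0);
#       y[i] = a * x[i] + y[i];
#   }

x = np.arange(4, dtype=np.float32)   # [0, 1, 2, 3]
y = np.ones(4, dtype=np.float32)     # [1, 1, 1, 1]
print(saxpy(2.0, x, y))              # → [1. 3. 5. 7.]
```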

For Orange this means that the most commonly used, parallelizable learning algorithms would need to be rewritten for GPUs.
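To make "parallelizable" concrete, here is a hedged sketch of one typical bottleneck in such algorithms: the pairwise distance matrix used by kNN-style learners. Every entry D[i, j] is independent of the others, which is exactly the shape of work GPUs excel at; the NumPy version below is only a CPU stand-in for what a GPU rewrite would compute.

```python
# Hedged sketch: a parallelizable learning-algorithm bottleneck (the pairwise
# squared-distance matrix behind kNN). Each D[i, j] entry is independent, so
# on a GPU every (i, j) pair could be computed by a separate thread.
import numpy as np

def pairwise_sq_dists(X):
    """Squared Euclidean distances between all rows of X (shape n x d)."""
    # Uses the identity ||xi - xj||^2 = ||xi||^2 + ||xj||^2 - 2 * xi.xj,
    # fully vectorized over all pairs at once.
    sq = (X ** 2).sum(axis=1)
    return sq[:, None] + sq[None, :] - 2.0 * X @ X.T

X = np.array([[0.0, 0.0], [3.0, 4.0]])
print(pairwise_sq_dists(X))  # symmetric 2x2 matrix, 25.0 off the diagonal
```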

Change History

comment:1 Changed 3 years ago by janez

  • Status changed from new to closed
  • Resolution set to wontfix

Our tests have shown the GPU to be rather useless even for massively parallel tasks from our field, such as BLAST. Please start a discussion on the forum, and let's reopen the ticket if there are any serious proponents (including volunteers who are willing to invest their own (and not others' ;) ) time).

Note: See TracTickets for help on using tickets.