Simple Project List Software Download Map

Distributed Computing
173 projects in result set
Average: 0.0 (0 total)
Last Update: 2017-04-27 23:21

Diskless Remote Boot in Linux (DRBL)

DRBL provides a diskless or systemless environment. It uses distributed hardware resources and makes it possible for clients to fully access local hardware. It also includes Clonezilla, a partition and disk cloning utility similar to Ghost.

Development Status: 5 - Production/Stable
Intended Audience: Developers, System Administrators, Education
Natural Language: Chinese (Traditional), English
Operating System: Linux
Programming Language: C, Perl, Unix Shell
User Interface: X11 Applications
Activity Percentile: 24
Activity Ranking: 105
Register Date: 2016-03-20 12:41
Average: 4.7 (34 total)
Last Update: 2016-08-09 20:26

Xming X Server for Windows

Xming is the leading X Window System server for Microsoft Windows 8/7/Vista/XP (plus Server 2012/2008/2003). It is fully featured, small, and fast, simple to install, and, because it is a standalone native Microsoft Windows program, easily made portable (no machine-specific installation needed).

Average: 0.0 (0 total)
Last Update: 2014-06-03 08:35

JPPF

JPPF makes it easy to parallelize computationally intensive tasks and execute them on a Grid.
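JPPF itself is a Java platform and its API is not shown here; as a language-neutral illustration of the pattern it implements (submitting independent compute-intensive tasks to a pool of workers and gathering results as they finish), here is a minimal local sketch using Python's standard library, with the executor standing in for a JPPF grid:

```python
# Local sketch of grid-style task submission: independent tasks are
# submitted to an executor and their results are gathered as they complete.
# On a JPPF grid, remote nodes would play the role of the worker processes.
from concurrent.futures import ProcessPoolExecutor, as_completed

def expensive_task(x):
    # Placeholder for a computationally intensive task.
    return x ** 2

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(expensive_task, x) for x in range(8)]
        results = sorted(f.result() for f in as_completed(futures))
    print(results)  # squares of 0..7
```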

Average: 0.0 (0 total)
Last Update: 2015-06-14 02:35

hadoop for windows

Unofficial prebuilt binary packages of Apache Hadoop for Windows, Apache Hive for Windows, Apache Spark for Windows, Apache Drill for Windows, and Azkaban for Windows.



Development Status: 2 - Pre-Alpha
Intended Audience: Science/Research
Operating System: MinGW/MSYS (MS Windows), Windows 7
Programming Language: Java
Register Date: 2015-02-22 06:32
Average: 0.0 (0 total)
Last Update: 2013-07-29 22:58

Makeflow

Makeflow is a workflow engine for executing large complex applications on clusters, clouds, and grids. It can be used to drive several different distributed computing systems, including Condor, SGE, and the included Work Queue system. It does not require a distributed filesystem, so you can use it to harness whatever collection of machines you have available. It is typically used for scaling up data-intensive scientific applications to hundreds or thousands of cores.
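Makeflow workflows are written in a Make-like rule syntax: each rule lists output files, input files, and a command to produce the outputs. A minimal hypothetical two-rule workflow (filenames and commands are illustrative only):

```make
# Makeflow rule syntax: outputs : inputs
#     <tab>command
sorted.txt: input.txt
	sort input.txt > sorted.txt

top10.txt: sorted.txt
	head -n 10 sorted.txt > top10.txt
```

Makeflow uses the dependency graph implied by these rules to decide which commands can run in parallel and where to dispatch them.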

Average: 0.0 (0 total)
Last Update: 2012-11-06 23:43

Shared Scientific Toolbox in Java

The Shared Scientific Toolbox is a library that facilitates development of efficient, modular, and robust scientific/distributed computing applications in Java. It features multidimensional arrays with extensive linear algebra and FFT support, an asynchronous, scalable networking layer, and advanced class loading, message passing, and statistics packages.

Average: 5.0 (1 total)
Last Update: 2017-05-12 22:51

Talend Open Studio for Data Integration

Talend provides integration that truly scales. From small projects to enterprise-wide implementations, Talend’s highly scalable data, application and business process integration platform maximizes information assets and development skillsets. Ready for big data, Talend’s flexible architecture adapts to future IT platforms. And Talend’s predictable subscription-based model guarantees that value scales, too.

Average: 1.0 (1 total)
Last Update: 2011-03-22 04:39

Dapper Dataflow Engine

Dapper, or "Distributed and Parallel Program Execution Runtime", is a tool for taming the complexities of developing for large-scale cloud and grid computing, enabling the user to create distributed computations from the essentials: the code that will execute, along with a dataflow graph description. It supports rich execution semantics, carefree deployment, a robust control protocol, modification of the dataflow graph at runtime, and an intuitive user interface.

Average: 0.0 (0 total)
Last Update: 2013-07-29 22:54

Parrot and Chirp

Parrot and Chirp are user-level tools that make it easy to rapidly deploy wide area filesystems. Parrot is the client component: it transparently attaches to unmodified applications, and redirects their system calls to various remote servers. A variety of controls can be applied to modify the namespace and resources available to the application. Chirp is the server component: it allows an ordinary user to easily export and share storage across the wide area with a single command. A rich access control system allows users to mix and match multiple authentication types. Parrot and Chirp are most useful in the context of large scale distributed systems such as clusters, clouds, and grids where one may have limited permissions to install software.

Average: 0.0 (0 total)
Last Update: 2010-12-14 19:35

StarCluster

StarCluster is a utility for creating traditional computing clusters used in research labs or for general distributed computing applications on Amazon's Elastic Compute Cloud (EC2). It uses a simple configuration file provided by the user to request cloud resources from Amazon and to automatically configure them with a queuing system, an NFS shared /home directory, passwordless SSH, OpenMPI, and ~140GB scratch disk space. It consists of a Python library and a simple command line interface to the library. For end-users, the command line interface provides simple intuitive options for getting started with distributed computing on EC2 (i.e. starting/stopping clusters, managing AMIs, etc). For developers, the library wraps the EC2 API to provide a simplified interface for launching/terminating nodes, executing commands on the nodes, copying files to/from the nodes, etc.
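The "simple configuration file" mentioned above is an INI-style file (by default `~/.starcluster/config`). A sketch with placeholder values (section and option names follow StarCluster's documented format, but check your version's documentation; the AMI ID and key paths here are illustrative):

```ini
[global]
DEFAULT_TEMPLATE = smallcluster

[aws info]
AWS_ACCESS_KEY_ID = your_access_key
AWS_SECRET_ACCESS_KEY = your_secret_key
AWS_USER_ID = your_user_id

[key mykey]
KEY_LOCATION = ~/.ssh/mykey.rsa

[cluster smallcluster]
KEYNAME = mykey
CLUSTER_SIZE = 2
NODE_IMAGE_ID = ami-xxxxxxxx
NODE_INSTANCE_TYPE = m1.small
```

With a template like this defined, a single command such as `starcluster start smallcluster` requests the EC2 nodes and configures the queuing system, NFS-shared /home, passwordless SSH, and OpenMPI described above.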

Average: 5.0 (1 total)
Last Update: 2010-02-18 15:28

jmemcached

jmemcached is a fast network-available cache daemon. It is protocol-compatible with memcached, but written in Java and suitable for applications with portability concerns, where Java is the preferred solution, or for using the memcached protocol in embedded applications with alternate storage engines. Existing memcached clients work unmodified. It can run as a standalone daemon or be embedded inside an existing Java application.
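Because jmemcached speaks the memcached text protocol, any standard client session works against it unchanged. A sample exchange over a TCP connection (client lines interleaved with server replies; the key `greeting` is illustrative):

```
set greeting 0 0 5
hello
STORED
get greeting
VALUE greeting 0 5
hello
END
```

The `set` arguments are key, flags, expiry time in seconds (0 = never), and the byte count of the data line that follows.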

Average: 0.0 (0 total)
Last Update: 2010-06-17 07:58

DAC

DAC (Dynamic Agent Computations) is a novel software framework designed for implementing multi-agent systems that describe parallel computations. The whole system is easy to configure and extend, but also very efficient and scalable. Moreover, the technology that is used (JMS, Cajo, JMX) ensures high reliability of the framework, which can be used in a production environment.

Average: 0.0 (0 total)
Last Update: 2014-06-07 03:10

magic.jar

magic.jar is a command line tool that allows you to execute the mobile, sandboxed Lua snippets available on TinyBrain.de on any machine. It can do text operations and display GUIs.

Average: 0.0 (0 total)
Last Update: 2010-05-05 09:09

XtreemOS

The overall objective of the XtreemOS project is the design, implementation, evaluation, and distribution of a grid operating system (called XtreemOS) with native support for virtual organizations (VO). XtreemOS is capable of running on a wide range of underlying platforms, from clusters to mobiles. It is based on Mandriva Linux, with support to come for other distributions later.

Average: 0.0 (0 total)
Last Update: 2012-10-25 00:40

dispy

dispy is a Python framework for parallel execution of computations by distributing them across multiple processors in a single machine (SMP), or among many machines in a cluster or grid. The computations can be standalone programs or Python functions. dispy is well suited for the data parallel (SIMD) paradigm where a computation is evaluated with different (large) datasets independently (similar to Hadoop, MapReduce, Parallel Python). dispy features include automatic distribution of dependencies (files, Python functions, classes, modules), client-side and server-side fault recovery, scheduling of computations to specific nodes, encryption for security, sharing of computation resources if desired, and more.
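The data-parallel (SIMD) pattern dispy distributes is the same function applied independently to many inputs. A minimal local stand-in using only Python's standard library (in dispy itself, `dispy.JobCluster` and `cluster.submit` would replace the `Pool` here, running `compute` on cluster nodes instead of local processes):

```python
# Local illustration of the data-parallel pattern: one function,
# many independent inputs, evaluated concurrently.
from multiprocessing import Pool

def compute(n):
    # An independent, CPU-bound task: sum of squares below n.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = pool.map(compute, [10, 100, 1000])
    print(results)
```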