Key Features

Pipeline Creation

In JupyterLab, the order in which cells are executed is critical because the results depend on it. Despite this, users cannot specify the cell execution order or declare dependencies among cells, so there is no guarantee that the same results will be obtained every time.

Link enables users to automatically determine the execution sequence by specifying the relationships among cells. Users can create a pipeline on the same screen as the code to improve readability. Additional features such as execution options, comments, header color settings, and saving and sharing of components are available for convenience and simplicity.
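Conceptually, once dependencies among cells are declared, a valid execution order can be derived by topological sorting. The sketch below illustrates the idea with Python's standard-library `graphlib`; the cell names and dependency map are hypothetical, and Link's internal mechanism may differ:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each cell lists the cells it depends on.
cell_deps = {
    "load_data": [],
    "preprocess": ["load_data"],
    "train": ["preprocess"],
    "evaluate": ["train", "preprocess"],
}

# Derive an execution order in which every cell runs after its dependencies.
order = list(TopologicalSorter(cell_deps).static_order())
print(order)
```

Because the order is derived from the declared relationships rather than from the on-screen position of cells, re-running the pipeline always respects the same dependencies.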

Example of pipeline creation

Cache Management

Adding, modifying or deleting code cells in JupyterLab requires constant monitoring of the cell outputs and often entails rerunning a large part of the code cells. If the JupyterLab kernel is restarted, users may have to rerun all of the cells from the beginning. These repetitive processes result in duplicate work and lost productivity.

To address these shortcomings, Link caches the results of the cells that have already been executed, eliminating the need to re-execute successfully run cells. Link also enables data scientists to export, import and share cached results to resume the process at the exact point where collaborators left off.
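The core idea of result caching can be sketched as a lookup keyed by the cell's source code: if an identical cell has already been executed, the stored result is returned instead of running the code again. This is an illustrative simplification, not Link's actual implementation (which also handles exporting and importing caches):

```python
import hashlib

# Minimal sketch: cache cell results keyed by a hash of the source code.
cache = {}
calls = []  # records which sources were actually executed


def executor(source):
    """Stand-in for real cell execution (here: evaluate an expression)."""
    calls.append(source)
    return eval(source)


def run_cell(source):
    key = hashlib.sha256(source.encode()).hexdigest()
    if key in cache:          # already executed successfully: reuse result
        return cache[key]
    result = executor(source)  # first run: execute and store
    cache[key] = result
    return result


print(run_cell("1 + 1"))  # executes the cell
print(run_cell("1 + 1"))  # cache hit: no re-execution
print(len(calls))         # only one real execution occurred
```

A shared cache in this style is also what makes it possible for a collaborator to resume from previously computed results instead of re-running everything.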

Example of Caching

Remote Resources

Using JupyterLab with external resources, such as GPUs, usually requires users to run all tasks on the server. This limits both the flexibility of the development environment and the efficient use of server resources.

To solve this problem, Link enables users to execute pipelines on remote servers. Server resources can be used efficiently by assigning each pipeline component to a different remote server. In addition, users can remotely execute only the code they want, at the time of their choice, without needing external libraries, which also allows a flexible development environment to be built.


Hyper-Parameter Optimizer

In machine learning, a hyper-parameter is a parameter whose value is set by the user before training, rather than learned from data. Optimizing all hyper-parameter values manually requires extensive trial and error, and accuracy cannot be guaranteed.

To address this issue, Link provides the Hyper-Parameter Optimizer, which finds optimal hyper-parameters automatically. Users can also view changes in target values in real time and apply optimized hyper-parameters even when there are multiple target values.
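The automated search that an optimizer performs can be illustrated with a simple random search over a toy objective. Everything here is hypothetical: the objective stands in for training and evaluating a real model, and Link's optimizer uses its own search strategy:

```python
import random

# Toy objective: a synthetic validation "loss" over two hyper-parameters.
# (Hypothetical; a real target would come from training a model.)
def objective(lr, batch_size):
    return (lr - 0.01) ** 2 + (batch_size - 64) ** 2 / 1e4


random.seed(0)
best = None
for trial in range(50):
    # Sample a candidate configuration from the search space.
    params = {
        "lr": 10 ** random.uniform(-4, -1),        # log-uniform learning rate
        "batch_size": random.choice([16, 32, 64, 128]),
    }
    loss = objective(**params)
    if best is None or loss < best[0]:  # keep the best configuration so far
        best = (loss, params)

print(best[1])  # hyper-parameters with the lowest observed loss
```

More sophisticated strategies (e.g. Bayesian optimization) replace the random sampling step with a model-guided choice, but the track-the-best loop is the same.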

Example of Hyper-Parameter Optimizer

Version Control

The lack of a version control system in JupyterLab makes it difficult for data scientists to track file deletions and incorrect modifications. As a result, it is inconvenient to check for source code changes or to resolve merge conflicts.

Link supports Git-based version control, so users can manage source code versions and the change history of their pipelines and pipeline source code. In particular, users can conveniently review changes to the pipeline and source code after each commit. Also, when a conflict occurs while merging a Git branch, users can easily resolve it through the Pipeline Panel.


Example of Version Control

Easy Collaboration

Code readability and reusability tend to suffer due to JupyterLab’s flexibility, which poses challenges for collaboration. To tackle these difficulties, Link provides the four sharing features described below.

First, users can save and share a pipeline.
(Export/Import link pipeline)

Second, users can save and share libraries and modules within the project/collaboration team.
(Export/Import link component)

Third, users can save and share execution results (cache) to reduce unnecessary, redundant execution.
(Export/Import cache)

Fourth, users can save and share code (Python code only) that can be executed in minimal environments (e.g. a CLI environment).
(Save Python file: Save as an Executable Script (.py))