Can FCS Express integrate Python scripts?
Although the popular algorithms for high-dimensional data analysis in cytometry are already embedded in FCS Express and do not require any additional external software or “plugins”, some advanced use cases and research groups call for custom algorithms. If you need that additional flexibility, FCS Express allows you to run Python scripts via the Python Transformation pipeline step.
The Python Transformation pipeline step allows users to run their own Python scripts as part of FCS Express pipelines and to work with the resulting output directly within FCS Express.
To set up your machine for Python transformations, please complete the following installation instructions.
Please note: as of FCS Express 7.18, Python versions 3.11 and later are not yet supported, and the Python Transformation cannot yet be run in the Mac version of FCS Express.
Installing the Microsoft C++ Build Tools is required for PARC and TriMap, which are included in our FCSExpress Anaconda environment. It may also be required for other libraries.
- Go to this link: https://visualstudio.microsoft.com/visual-cpp-build-tools/ and click on Download Build Tools.
- Double-click on Vs_Buildtools.exe (it may be in your computer’s "Downloads" folder).
- If asked to "Allow Installer to make changes", click Yes.
- If you get a message about privacy, click Continue.
- From the Workloads tab (see screenshot below), choose Desktop development with C++.
- Under Optional components in the right pane (see screenshot below), only MSVC… and Windows 10 SDK (or Windows 11 SDK) are necessary. You may “uncheck” the others to save disk space.
- Click Install.
- The installation takes about 4 minutes and requires about 4.51 GB of disk space (8.15 GB on Windows 11).
- When done, exit the Visual Studio Installer window.
- If you are prompted to restart your computer, please do so.
- You may now go on to the next step of Installing Anaconda, Python and Required Packages, found below.
Skip this step if you already have Python and the desired libraries installed on your computer.
If Anaconda is already installed on your computer, skip to the step below called "Launch Anaconda Navigator".
A convenient way to install Python is to install Anaconda, a distribution of Python for scientific computing (3.5 GB of disk space is required).
- Follow the steps at the link below. You do not need to log in or sign up at any point when installing Anaconda and Python. If you are prompted to log in or sign up, exit the pop-up window.
Once Anaconda is installed, we can create an environment to work with FCS Express. An environment is a self-contained collection of libraries. Environments simplify library installation, avoid polluting your system-wide Python installation, sidestep dependency conflicts, and minimize reproducibility issues.
- Launch Anaconda Navigator from the Windows Start Menu
- Wait for the software to finish loading (wait for the blue animation to cease).
- If Anaconda asks to update, click Yes.
- If prompted to quit Anaconda, click Yes.
- If prompted again to update, click Update now.
- When the update is complete, click Launch Navigator.
- Click Environments.
- Download the FCSExpress environment you need:
| Package | v1.0 (July 2022) | v2.0 (Coming soon) |
| --- | --- | --- |
| Python | 3.10.5 | 3.10.15 |
| pandas | 1.4.3 | 2.2.3 |
| parc | 0.33 | 0.4 |
| hdbscan | 0.8.28 | 0.8.39 |
| trimap | 1.0.15 | 1.0.15 |
| opentsne | 0.6.2 | 1.0.2 |
| pacmap | 0.6.5 | 0.7.6 |
| ivis | 2.0.7 | 2.0.11 |
| scipy | 1.8.1 | 1.14.1 |
| scikit-learn | 1.1.1 | 1.6.0 |
| phate | 1.0.8 | 1.0.11 |
| umap-learn | 0.5.3 | 0.5.7 |
| numba | 0.55.2 | 0.60.0 |
| cytonormpy | - | 0.0.4 |
| numpy | 1.22.4 | 1.26.4 |
- In Anaconda > Environments, click Import.
- Select Local drive and click on the File Folder icon next to it
- Find the FCSExpress_v#.yml file that you downloaded from the website. It may be in your computer’s "Downloads" folder.
- Click Open.
- Click Import.
- The FCSExpress environment has now been loaded into your Anaconda Navigator.
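As an optional check, the following minimal sketch (an illustration only, assuming it is run with the Python interpreter of the imported FCSExpress environment) prints the installed version of each library from the table above, or reports it as missing:

#Optional check: print the installed version of each library from the table above.
#Assumes this is run with the Python interpreter of the imported FCSExpress environment.
from importlib.metadata import version, PackageNotFoundError

packages = ["pandas", "parc", "hdbscan", "trimap", "opentsne", "pacmap",
            "ivis", "scipy", "scikit-learn", "phate", "umap-learn", "numba", "numpy"]

for pkg in packages:
    try:
        print("{}: {}".format(pkg, version(pkg)))
    except PackageNotFoundError:
        print("{}: not installed".format(pkg))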
If the Python directory you wish to use in FCS Express is the Default, Registered Python on your computer, you may skip this step. The Default, Registered Python can be found in the Windows Registry Editor under Computer\HKEY_CURRENT_USER\Software\Python\PythonCore\3.X\InstallPath
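If you are not sure which installation is currently the Default, Registered Python, the minimal sketch below (an illustration only; the version subkey "3.10" is an assumption and may differ on your machine) reads the InstallPath value mentioned above directly from the Windows Registry:

#Illustration only: read the Default, Registered Python install path from the Windows Registry.
#The version subkey ("3.10") is an assumption; adjust it to match your installation.
import winreg

def registered_python_path(version="3.10"):
    key_path = r"Software\Python\PythonCore\{}\InstallPath".format(version)
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, key_path) as key:
        #The default (unnamed) value of the InstallPath key holds the installation directory
        return winreg.QueryValueEx(key, "")[0]

print(registered_python_path())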
- Launch FCS Express version 7.08.0018 or later.
- Open a New Layout
- Click File > Options.
- In the left pane, click Files/Directories
- Under Directory for Python (if not using Default, Registered Python), enter the path where the pythonXXX.dll is located for the environment you wish to use (a quick way to confirm that a directory contains this DLL is sketched after this list).
- If you have installed Python via Anaconda (and wish to use the FCSExpress environment), enter the path of your FCSExpress Anaconda environment into your FCS Express user option (Files/Directories > Directory for Python).
- The path can be found by hovering over the Anaconda environment (e.g. C:\Users\krittenbach\Anaconda3\envs\FCSExpress in the screenshot below).
- If desired, the Python directory (for an Anaconda environment) can alternatively be found and copied by the following steps:
- Click on the desired environment name in the Anaconda Navigator.
- Click the green "play" button next to the environment name.
- Click Open Terminal.
- Enter the following text: where python, then press the Enter key.
- Copy the path that contains your environment (e.g. C:\Users\krittenbach\Anaconda3\envs\FCSExpress).
- Paste the path into the FCS Express user option (Files/Directories > Directory for Python)
- If you installed Python from python.org, the path is likely similar to: C:\Users\XXXXXXXX\AppData\Local\Programs\Python\PythonXXX
- In FCS Express, click OK to apply the new user option.
- You can now try using the example scripts (provided on this page) in FCS Express.
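As referenced in the Directory for Python step above, the following minimal sketch (the example path is hypothetical; substitute the directory you plan to enter in the Directory for Python option) lists any python3*.dll files found in that directory:

#Minimal check: list the pythonXXX.dll files in a candidate Directory for Python.
#The path below is hypothetical; replace it with your own environment directory.
import glob
import os

candidate = r"C:\Users\YourName\Anaconda3\envs\FCSExpress"
dlls = glob.glob(os.path.join(candidate, "python3*.dll"))
print(dlls if dlls else "No python3*.dll found in {}".format(candidate))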
- Once a Pipeline is created, the Python transformation pipeline step can be added from the + > Miscellaneous category.
- If the pipeline is already applied to a plot, you may get the warning below, asking whether you want to disable the automatic execution of the pipeline and control execution with the "Execute Transformation Pipeline" button instead.
- The Python Transformation pipeline step will be added to the pipeline.
- Select the input parameters of interest.
- From one of the example scripts included on this webpage (below), highlight all of the text, starting with the first "#" line, and press Ctrl+C or right-click > Copy.
- In the Python Script text box, right-click and choose Clear Script to clear the existing script.
- Ensure the cursor is in the left-most position of the first line. To confirm this, press the Home key on your keyboard.
- In FCS Express, click inside the Python Script text box and press Ctrl+V to paste the script. Note: Right-Click > Paste is not available.
- Click Save Script.
- The User Defined Options section of the Python Transformation step will then be populated based on the script code.
- Drag the Pipeline from the Transformations window onto the plot in the layout.
- If you do not have "Automatically Run Pipeline" checked, press Execute Transformation Pipeline in the pipeline root step.
- Running the pipeline may take a few seconds or several minutes, depending on the file size, chosen script and computer power.
- Once the transformation is applied to a plot and the run is completed, the output parameters generated by the pipeline will be accessible, like any other parameter, by clicking on the X and Y axis titles of the plot of interest (e.g. Trimap 1 and Trimap 2 in the screenshot below).
Note: When the script is done running, the title of your plot will say "Pipeline transformed". If your plot title does not contain the filename, you can check whether the transformation is applied in the Overlay formatting dialog for that plot.
Some example scripts are presented below. Please see our full documentation on using the Python pipeline step in the FCS Express manual for more information on creating and implementing scripts.
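All of the example scripts below share the same structure. For orientation, here is a minimal pass-through sketch (not one of the official examples) that simply rescales the first selected input parameter: RegisterOptions declares the user-editable options, RegisterParameters declares the output parameters, and Execute receives the selected data and returns the results.

#Minimal sketch for orientation (not an official example): a Python Transformation
#that multiplies the first selected input parameter by a user-defined scale factor.
from FcsExpress import *
from pandas import DataFrame

def RegisterOptions(params):
    #User-editable option shown in the User Defined Options section of the pipeline step
    RegisterIntegerOption("Scale Factor", 2)

def RegisterParameters(opts, params):
    #Output parameter that will appear on the plot axes once the pipeline is applied
    RegisterParameter("Scaled Parameter 1")

def Execute(opts, data, res):
    df = DataFrame(data)
    #Multiply the first input parameter by the chosen scale factor
    res["Scaled Parameter 1"] = df.to_numpy()[:, 0] * opts["Scale Factor"]
    return res #return the res object that will be re-imported into FCS Express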
In this example script, the Python Transformation pipeline step is defined to run PARC (PARC: ultrafast and accurate clustering of phenotypic data of millions of single cells; Shobana V Stassen; Bioinformatics, Volume 36, Issue 9, 1 May 2020).
Note: to run this script, Python, pandas and parc should be properly installed on the user's computer.
#This script allows you to run PARC as a Python Transformation pipeline step in FCS Express 7
#Detailed info on PARC in Python can be found at https://github.com/ShobiStassen/PARC
from FcsExpress import *

try:
    from pandas import DataFrame
except ModuleNotFoundError:
    raise RuntimeError("Current Python distribution does not contain Pandas. " +
                       "To test this script please install pandas: " +
                       "(https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html)")

try:
    import parc
except ModuleNotFoundError:
    raise RuntimeError("Current Python distribution does not contain parc. " +
                       "To test this script please install parc: " +
                       "(https://github.com/ShobiStassen/PARC)")

def RegisterOptions(params):
    RegisterIntegerOption("dist_std_local", 2)
    RegisterStringOption("jac_std_global", "median")
    RegisterIntegerOption("resolution_parameter", 1)

def RegisterParameters(opts, params):
    RegisterClassification("Parc Cluster Assignments")

def Execute(opts, data, res):
    df = DataFrame(data)

    #Function to check whether an object can be converted to float
    def isfloat(num):
        try:
            float(num)
            return True
        except ValueError:
            return False

    #Check whether opts["jac_std_global"] is set to "median" or is a float
    if opts["jac_std_global"] == "median":
        jac_std_global = "median"
    elif isfloat(opts["jac_std_global"]):
        jac_std_global = float(opts["jac_std_global"])
    else:
        raise Exception("The jac_std_global option only accepts median or numerical values")

    print("Running PARC with dist_std_local {}, jac_std_global {}, resolution_parameter {}, on {} data points".format(
        opts["dist_std_local"], jac_std_global, opts["resolution_parameter"], NumberDataPoints))

    parc_obj = parc.PARC(df.to_numpy(),
                         dist_std_local=opts["dist_std_local"],
                         jac_std_global=jac_std_global,
                         resolution_parameter=opts["resolution_parameter"]) #initiate PARC
    parc_obj.run_PARC() #run the clustering
    res["Parc Cluster Assignments"] = parc_obj.labels #save the cluster assignments into the res object
    return res #return the res object that will be re-imported into FCS Express
In this example script, the Python Transformation pipeline step is defined to run HDBSCAN (Campello R.J.G.B., Moulavi D., Sander J. (2013) Density-Based Clustering Based on Hierarchical Density Estimates. In: Pei J., Tseng V.S., Cao L., Motoda H., Xu G. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2013. Lecture Notes in Computer Science, vol 7819. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-37456-2_14).
HDBSCAN can be useful to identify clusters on UMAP plots. When HDBSCAN is used for this purpose, creating a UMAP map with dense clusters (i.e. keeping the UMAP "Min Low Dim Distance" value low) will help HDBSCAN clustering.
More information on HDBSCAN for Python can be found here.
Note #1: to run this script, Python, pandas, and hdbscan should be properly installed on the user's computer.
Note #2: If noisy data points are detected by HDBSCAN, they will be grouped under "HDBSCAN Cluster Assignment" equal to "1" in FCS Express.
Note #3: with large datasets, HDBSCAN might return a "Could not find -c" error. If that happens, setting core_dist_n_jobs to 1 might solve the issue.
#HDBSCAN
from FcsExpress import *

try:
    from pandas import DataFrame
except ModuleNotFoundError:
    raise RuntimeError("Current Python distribution does not contain Pandas. " +
                       "To test this script please install pandas: " +
                       "(https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html)")

try:
    import hdbscan
except ModuleNotFoundError:
    raise RuntimeError("Current Python distribution does not contain hdbscan. " +
                       "To test this script please install hdbscan: " +
                       "(https://pypi.org/project/hdbscan/)")

from collections import Counter

def RegisterOptions(params):
    RegisterIntegerOption("min_cluster_size", 5)
    RegisterStringOption("min_samples", "None")
    RegisterIntegerOption("core_dist_n_jobs", 4)

def RegisterParameters(opts, params):
    RegisterClassification("HDBSCAN Cluster Assignments")
    RegisterParameter("HDBSCAN Outlier scores")

def Execute(opts, data, res):
    df = DataFrame(data)

    #See if using the default min_samples
    if opts["min_samples"] == "None":
        min_samples = None # Setting min_samples to "None" leads to a default choice
    elif opts["min_samples"].isnumeric():
        min_samples = int(opts["min_samples"])
    else:
        raise Exception("The min_samples option only accepts None or integer values")

    print("Running HDBSCAN on {} data points, with min_cluster_size={}, min_samples={} and core_dist_n_jobs={}".format(
        NumberDataPoints, opts["min_cluster_size"], min_samples, opts["core_dist_n_jobs"]))

    clusterer = hdbscan.HDBSCAN(min_cluster_size=opts["min_cluster_size"],
                                min_samples=min_samples,
                                core_dist_n_jobs=opts["core_dist_n_jobs"])
    cluster_labels = clusterer.fit_predict(df)
    outlier_scores = clusterer.outlier_scores_

    #Noisy data points are labelled by HDBSCAN with a cluster assignment of -1.
    #FCS Express requires cluster assignment values starting from 0.
    #So, if noisy data points are present, all cluster labels are increased by 1.
    if min(cluster_labels) == -1:
        noisy = Counter(cluster_labels)[-1] #number of noisy data points
        print(noisy, "noisy datapoints have been detected" +
              " and will be grouped under HDBSCAN Cluster Assignment equal to 1 in FCS Express")
        cluster_labels = cluster_labels + 1 #shift cluster labels so they start at 0

    res["HDBSCAN Cluster Assignments"] = cluster_labels
    res["HDBSCAN Outlier scores"] = outlier_scores
    return res
In this example script, the Python Transformation pipeline step is defined to run TriMap (TriMap: Large-scale Dimensionality Reduction Using Triplets; Ehsan Amid, Manfred K. Warmuth; arXiv:1910.00204, 2019).
Note: to run this script, Python, pandas and trimap should be properly installed on the user's computer.
#This script allows you to run TriMap as a Python Transformation pipeline step in FCS Express 7
#Detailed info on TriMap in Python can be found at https://pypi.org/project/trimap/
from FcsExpress import *

try:
    from pandas import DataFrame
except ModuleNotFoundError:
    raise RuntimeError("Current Python distribution does not contain Pandas. " +
                       "To test this script please install pandas: " +
                       "(https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html)")

try:
    import trimap
except ModuleNotFoundError:
    raise RuntimeError("Current Python distribution does not contain trimap. " +
                       "To test this script please install trimap: " +
                       "(https://pypi.org/project/trimap/)")

#Map the TriMap argument names to the option names shown in FCS Express
parameter_map = {
    "n_inliers": "Number Nearest Neighbors",
    "n_outliers": "Number Outliers",
    "n_random": "Number Random Triplets",
    "distance": "Distance Measure",
    "weight_adj": "Gamma",
    "lr": "Learning Rate",
    "n_iters": "Number Iterations"
}

def RegisterOptions(params):
    RegisterIntegerOption(parameter_map["n_inliers"], 10)
    RegisterIntegerOption(parameter_map["n_outliers"], 5)
    RegisterIntegerOption(parameter_map["n_random"], 5)
    RegisterStringOption(parameter_map["distance"], "euclidean")
    RegisterFloatOption(parameter_map["weight_adj"], 500.0)
    RegisterFloatOption(parameter_map["lr"], 1000.0)
    RegisterIntegerOption(parameter_map["n_iters"], 400)

def RegisterParameters(opts, params):
    RegisterParameter("Trimap 1")
    RegisterParameter("Trimap 2")

def Execute(opts, data, res):
    df = DataFrame(data)
    embedding = trimap.TRIMAP(
        n_inliers=opts[parameter_map["n_inliers"]],
        n_outliers=opts[parameter_map["n_outliers"]],
        n_random=opts[parameter_map["n_random"]],
        distance=opts[parameter_map["distance"]],
        weight_adj=opts[parameter_map["weight_adj"]],
        lr=opts[parameter_map["lr"]],
        n_iters=opts[parameter_map["n_iters"]]
    ).fit_transform(df.to_numpy())
    res["Trimap 1"] = embedding[:, 0]
    res["Trimap 2"] = embedding[:, 1]
    return res
In this example script, the Python Transformation pipeline step is defined to run openTSNE (openTSNE: A Modular Python Library for t-SNE Dimensionality Reduction and Embedding. Poličar, P.G., Stražar, M. and Zupan, B. Journal of Statistical Software. 109, 3 (May 2024), 1–30. DOI:https://doi.org/10.18637/jss.v109.i03).
Note: to run this script, Python, pandas and openTSNE should be properly installed on the user's computer.
#This script allows you to run openTSNE as a Python Transformation pipeline step in FCS Express 7
#Detailed info on openTSNE in Python can be found at https://opentsne.readthedocs.io/en/latest/index.html
from FcsExpress import *

try:
    from pandas import DataFrame
except ModuleNotFoundError:
    raise RuntimeError("Current Python distribution does not contain Pandas. " +
                       "To test this script please install pandas: " +
                       "(https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html)")

try:
    from openTSNE import TSNE
except ModuleNotFoundError:
    raise RuntimeError("Current Python distribution does not contain OpenTSNE. " +
                       "To test this script please install openTSNE: " +
                       "(https://opentsne.readthedocs.io/en/latest/installation.html)")

def RegisterOptions(params):
    RegisterIntegerOption("Number Iterations", 500)
    RegisterIntegerOption("Perplexity", 30)
    RegisterFloatOption("Burnes-Hut Theta", 0.5)
    RegisterStringOption("Learning Rate", "auto")
    RegisterStringOption("Metric", "euclidean")
    RegisterIntegerOption("Random State", 42)
    RegisterIntegerOption("Number Threads", -1) #-1 means all processors
    RegisterStringOption("Initialization", "pca")

def RegisterParameters(opts, params):
    RegisterParameter("openTSNE 1")
    RegisterParameter("openTSNE 2")

def Execute(opts, data, res):
    df = DataFrame(data)

    # See if doing auto learning
    if opts["Learning Rate"] == "auto":
        l_rate = 'auto' # Auto learning will be used
    elif opts["Learning Rate"].isnumeric():
        l_rate = float(opts["Learning Rate"])
    else:
        raise Exception("The Learning Rate option only accept auto or floating values")

    #Check initialization
    Initializations = ["pca", "spectral", "random"]
    if opts["Initialization"] not in Initializations:
        raise Exception("The Initialization option only accept the following values: pca, spectral, random")

    #Run TSNE
    embedding = TSNE(
        theta=opts["Burnes-Hut Theta"],
        perplexity=opts["Perplexity"],
        n_iter=opts["Number Iterations"],
        learning_rate=l_rate,
        metric=opts["Metric"],
        random_state=opts["Random State"],
        n_jobs=opts["Number Threads"],
        initialization=opts["Initialization"]
    ).fit(df.to_numpy())

    print("Running TSNE using {} processors, with {} Initialization on {} data points, with Number of Iterations = {}, Perplexity = {}, Theta = {}, Learning Rate = {}, Metric = {}, Random State = {}".format(
        opts["Number Threads"], opts["Initialization"], NumberDataPoints, opts["Number Iterations"],
        opts["Perplexity"], opts["Burnes-Hut Theta"], l_rate, opts["Metric"], opts["Random State"]))

    res["openTSNE 1"] = embedding[:, 0]
    res["openTSNE 2"] = embedding[:, 1]
    return res
In this example script, the Python Transformation pipeline step is defined to run PaCMAP (Wang, Huang, et al., Understanding How Dimension Reduction Tools Work: An Empirical Approach to Deciphering t-SNE, UMAP, TriMap, and PaCMAP for Data Visualization; Journal of Machine Learning Research 22 (2021) 1-73). More information on PaCMAP for Python can be found here.
Note: to run this script, Python, pandas and pacmap should be properly installed on the user's computer.
#This script allows you to run PaCMAP as a Python Transformation pipeline step in FCS Express 7
#Detailed info on PaCMAP in Python can be found at https://github.com/YingfanWang/PaCMAP
from FcsExpress import *

try:
    from pandas import DataFrame
except ModuleNotFoundError:
    raise RuntimeError("Current Python distribution does not contain Pandas. " +
                       "To test this script please install pandas: " +
                       "(https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html)")

try:
    import pacmap
except ModuleNotFoundError:
    raise RuntimeError("Current Python distribution does not contain Pacmap. " +
                       "To test this script please install pacmap: " +
                       "(https://github.com/YingfanWang/PaCMAP)")

def RegisterOptions(params):
    RegisterIntegerOption("Number of dimensions", 2)
    RegisterStringOption("Number of neighbors", "None")
    RegisterFloatOption("MN_ratio", 0.5)
    RegisterFloatOption("FP_ratio", 2.0)
    RegisterStringOption("Initialization", "pca")
    RegisterIntegerOption("Random seed", 4)
    RegisterIntegerOption("Number of iterations", 450)

def RegisterParameters(opts, params):
    for dim in range(opts["Number of dimensions"]):
        RegisterParameter("PaCMAP {}".format(dim + 1))

def Execute(opts, data, res):
    #Reading data
    df = DataFrame(data)
    X = df.to_numpy()

    #See if doing default n_neighbors
    if opts["Number of neighbors"] == "None":
        n_neighbors = None # Setting n_neighbors to "None" leads to a default choice
    elif opts["Number of neighbors"].isnumeric():
        n_neighbors = int(opts["Number of neighbors"])
    else:
        raise Exception("The Number of neighbors option only accept None or integer values")

    #Check initialization
    Initializations = ["pca", "random"]
    if opts["Initialization"] not in Initializations:
        raise Exception("The Initialization option only accept the following values: pca, random")

    #Initializing the pacmap instance
    embedding = pacmap.PaCMAP(
        n_components=opts["Number of dimensions"],
        n_neighbors=n_neighbors,
        MN_ratio=opts["MN_ratio"],
        FP_ratio=opts["FP_ratio"],
        random_state=opts["Random seed"],
        num_iters=opts["Number of iterations"],
        verbose=True)

    #Running PaCMAP
    X_transformed = embedding.fit_transform(X, init=opts["Initialization"])
    print("Running PaCMAP with {} Initialization on {} data points, with Number of Neighbors set to {}".format(
        opts["Initialization"], NumberDataPoints, n_neighbors))

    #Loop over dimensions and set results
    for dim in range(opts["Number of dimensions"]):
        res["PaCMAP {}".format(dim + 1)] = X_transformed[:, dim]
    return res
In this example script, the Python Transformation pipeline step is defined to run Ivis (Szubert, B., Cole, J.E., Monaco, C. et al. Structure-preserving visualisation of high dimensional single-cell datasets. Sci Rep 9, 8914 (2019). https://doi.org/10.1038/s41598-019-45301-0). More information on Ivis for Python can be found here and here.
Note: to run this script, Python, pandas and ivis should be properly installed on the user's computer.
#This script allows you to run Ivis as a Python Transformation pipeline step in FCS Express 7
#Detailed info on Ivis in Python can be found at https://bering-ivis.readthedocs.io/en/latest/python_package.html
from FcsExpress import *

try:
    from pandas import DataFrame
except ModuleNotFoundError:
    raise RuntimeError("Current Python distribution does not contain Pandas. " +
                       "To test this script please install pandas: " +
                       "(https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html)")

try:
    from ivis import Ivis
except ModuleNotFoundError:
    raise RuntimeError("Current Python distribution does not contain ivis. " +
                       "To test this script please install ivis: " +
                       "(https://github.com/beringresearch/ivis)")

try:
    from scipy import sparse
except ModuleNotFoundError:
    raise RuntimeError("Current Python distribution does not contain scipy. " +
                       "To test this script please install scipy: " +
                       "(https://scipy.org/)")

def RegisterOptions(params):
    RegisterIntegerOption("Embedding Dimensions", 2)
    RegisterIntegerOption("Number of Nearest Neighbours", 15)
    RegisterIntegerOption("Number of epochs without progress", 10)
    RegisterStringOption("Type of input", "numpy array")
    RegisterStringOption("Keras Model", "auto")
    #The Number of Nearest Neighbours, the Number of epochs without progress
    #and the Keras Model are tunable parameters that should be selected
    #on the basis of dataset size and complexity.
    #Please refer to https://bering-ivis.readthedocs.io/en/latest/hyperparameters.html

def RegisterParameters(opts, params):
    for dim in range(opts["Embedding Dimensions"]):
        RegisterParameter("Ivis {}".format(dim + 1))

def Execute(opts, data, res):
    df = DataFrame(data)

    #Converting the input data to either a numpy array or a sparse matrix based on the user selection
    if opts["Type of input"] == "numpy array":
        X = df.to_numpy()
    elif opts["Type of input"] == "sparse matrix":
        X = sparse.csr_matrix(df)
    else:
        raise Exception("The Type of input option only accept the following values: numpy array, sparse matrix")

    # Determine model. For more details, see:
    # https://bering-ivis.readthedocs.io/en/latest/hyperparameters.html
    models = ["maaten", "szubert", "hinton"]
    if opts["Keras Model"] == "auto":
        if NumberDataPoints > 500000:
            keras_model = "szubert"
        else:
            keras_model = "maaten"
    elif opts["Keras Model"] in models:
        keras_model = opts["Keras Model"]
    else:
        raise Exception("The Model option only accept the following values: auto, maaten, szubert, hinton")

    print("Running Ivis with keras model {} on {} data points, using {} as input data type".format(
        keras_model, NumberDataPoints, opts["Type of input"]))

    # Set ivis parameters
    model = Ivis(embedding_dims=opts["Embedding Dimensions"],
                 k=opts["Number of Nearest Neighbours"],
                 n_epochs_without_progress=opts["Number of epochs without progress"],
                 model=keras_model)
    embeddings = model.fit_transform(X) # Generate embeddings

    #Loop over dimensions and set results
    for dim in range(opts["Embedding Dimensions"]):
        res["Ivis {}".format(dim + 1)] = embeddings[:, dim]
    return res #return the res object that will be re-imported into FCS Express
In this example script, the Python Transformation pipeline step is defined to run Phate (Moon, K.R., van Dijk, D., Wang, Z. et al. Visualizing structure and transitions in high-dimensional biological data. Nat Biotechnol 37, 1482–1492 (2019). https://doi.org/10.1038/s41587-019-0336-3). More information on Phate for Python can be found here.
Note: to run this script, Python, pandas and phate should be properly installed on the user's computer.
#This script allows you to run Phate as a Python Transformation pipeline step in FCS Express 7.
#Detailed info on Phate in Python can be found at https://pypi.org/project/phate/
from FcsExpress import *

try:
    from pandas import DataFrame
except ModuleNotFoundError:
    raise RuntimeError("Current Python distribution does not contain Pandas. " +
                       "To test this script please install pandas: " +
                       "(https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html)")

try:
    import phate
except ModuleNotFoundError:
    raise RuntimeError("Current Python distribution does not contain phate. " +
                       "To test this script please install phate: " +
                       "(https://pypi.org/project/phate/)")

parameter_map = {
    "knn": "Number Nearest Neighbors",
    "n_components": "Number of dimensions"
}

def RegisterOptions(params):
    #The full list of available parameters can be found at
    #https://phate.readthedocs.io/en/stable/api.html
    RegisterIntegerOption(parameter_map["knn"], 5)
    RegisterIntegerOption(parameter_map["n_components"], 2)

def RegisterParameters(opts, params):
    for dim in range(opts[parameter_map["n_components"]]):
        RegisterParameter("Phate {}".format(dim + 1))

def Execute(opts, data, res):
    df = DataFrame(data)
    phate_operator = phate.PHATE(
        knn=opts[parameter_map["knn"]],
        n_components=opts[parameter_map["n_components"]]
    )
    df_phate = phate_operator.fit_transform(df)
    #Loop over dimensions and set results
    for dim in range(opts[parameter_map["n_components"]]):
        res["Phate {}".format(dim + 1)] = df_phate[:, dim]
    return res
Coming soon...