Fastest way to search 5K rows within the 100M row-pair DataFrame (Python)


Post by Anonymous »

I'm not sure the title describes the problem well, but I'll explain it step by step. First, I have a stacked correlation DataFrame with gene-pair rows, sorted by score:


gene1      gene2      score
Gene3450   Gene9123   0.999706
Gene5219   Gene9161   0.9999161
Gene27     Gene6467   0.999646
Gene3255   Gene4865   0.999636
Gene2512   Gene5730   ...
...        ...        ...
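For scale (assuming the 10,000-gene matrix from the dummy-data section at the end), stacking the full correlation matrix gives n² rows, and the mirror-pair de-duplication in my code keeps roughly half of them:

```python
# Row counts for a stacked n x n correlation matrix (n = 10,000 genes).
n = 10_000
stacked_rows = n * n              # 100,000,000 rows before de-duplication
unique_rows = n * (n + 1) // 2    # 50,005,000 rows after dropping mirror pairs
                                  # (self-pairs on the diagonal are kept)
```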
Then I have a gold-standard table (terms_df) with id, name, and used_genes columns:



id   name        used_genes
1    Complex 1   [Gene3629, Gene8048, Gene9660, Gene4180, Gene1...]
2    Complex 2   [Gene3944, Gene931, Gene3769, Gene7523, Gene61...]
3    Complex 3   [Gene8236, Gene934, Gene5902, Gene165, Gene664...]
4    Complex 4   [Gene2399, Gene2236, Gene8932, Gene6670, Gene2...]
5    Complex 5   [...]
What I do:

1. Iterate over each gold-standard row.
2. Look up that complex's gene pairs in the stacked correlation pairs.
3. Compute precision, recall, and the area-under-the-curve score; the result goes into a new auc_score column:




name                                                used_genes                                           auc_score
Multisubunit ACTR coactivator complex               [CREBBP, KAT2B, NCOA3, EP300]                        0.001695
Condensin I complex                                 [SMC4, NCAPH, SMC2, NCAPG, NCAPD2]                   0.009233
BLOC-2 (biogenesis of lysosome-related organel...)  [HPS3, HPS5, HPS6]                                   0.000529
NCOR complex                                        [TBL1XR1, NCOR1, TBL1X, GPS2, HDAC3, CORO2A]         0.000839
BLOC-1 (biogenesis of lysosome-related organ...)    [DTNBP1, SNAPIN, BLOC1S6, BLOC1S1, BLOC1S5, BL...]   0.002227
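A minimal sketch of step 3, using a hypothetical predictions vector already sorted by correlation score (1 means both genes of the pair belong to the complex):

```python
import numpy as np

# Precision/recall at every cutoff come from a cumulative sum of true positives
# over the score-ranked pairs.
predictions = np.array([1, 1, 0, 1, 0])
tp = predictions.cumsum()                   # [1, 2, 2, 3, 3]
precision = tp / np.arange(1, len(tp) + 1)  # [1.0, 1.0, 0.667, 0.75, 0.6]
recall = tp / tp[-1]                        # [0.333, 0.667, 0.667, 1.0, 1.0]
```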

I'll share my function below. With the 100M-row stacked DataFrame and 5K terms it takes far too long, and I'm trying to find a way to cut the runtime. I think the problem is the iteration part.
Code: Select all

import numpy as np
from sklearn import metrics

def compute_per_complex_pr(corr_df, terms_df):

    pairwise_df = binary(corr_df)
    pairwise_df = quick_sort(pairwise_df).reset_index(drop=True)

    # Precompute a mapping from each gene to the row indices in the pairwise DataFrame where it appears.
    gene_to_pair_indices = {}
    for i, (gene_a, gene_b) in enumerate(zip(pairwise_df["gene1"], pairwise_df["gene2"])):
        gene_to_pair_indices.setdefault(gene_a, []).append(i)
        gene_to_pair_indices.setdefault(gene_b, []).append(i)

    # Initialize AUC scores (one for each complex) with NaNs.
    auc_scores = np.full(len(terms_df), np.nan)

    # Loop over each gene complex
    for idx, row in terms_df.iterrows():
        gene_set = set(row.used_genes)

        # Collect all row indices in the pairwise data where either gene belongs to the complex.
        candidate_indices = set()
        for gene in gene_set:
            candidate_indices.update(gene_to_pair_indices.get(gene, []))
        candidate_indices = sorted(candidate_indices)

        if not candidate_indices:
            continue

        # Select only the relevant pairwise comparisons.
        sub_df = pairwise_df.loc[candidate_indices]
        # A prediction is 1 if both genes in the pair are in the complex; otherwise 0.
        predictions = (sub_df["gene1"].isin(gene_set) & sub_df["gene2"].isin(gene_set)).astype(int)

        if predictions.sum() == 0:
            continue

        # Compute cumulative true positives and derive precision and recall.
        true_positive_cumsum = predictions.cumsum()
        precision = true_positive_cumsum / (np.arange(len(predictions)) + 1)
        recall = true_positive_cumsum / true_positive_cumsum.iloc[-1]

        if len(recall) < 2 or recall.iloc[-1] == 0:
            continue

        auc_scores[idx] = metrics.auc(recall, precision)

    # Add the computed AUC scores to the terms DataFrame.
    terms_df["auc_score"] = auc_scores
    return terms_df

def binary(corr):
    stack = corr.stack().rename_axis(index=['gene1', 'gene2']).reset_index(name='score')
    stack = drop_mirror_pairs(stack)
    return stack

def quick_sort(df, ascending=False):
    order = 1 if ascending else -1
    sorted_df = df.iloc[np.argsort(order * df["score"].values)].reset_index(drop=True)
    return sorted_df

def drop_mirror_pairs(df):
    gene_pairs = np.sort(df[["gene1", "gene2"]].to_numpy(), axis=1)
    df.loc[:, ["gene1", "gene2"]] = gene_pairs
    df = df.loc[~df.duplicated(subset=["gene1", "gene2"], keep="first")]
    return df
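One direction that may help: the per-complex work above is dominated by Python-level dict and set operations. Below is a sketch of the same algorithm that encodes gene names as integer codes once and then does all per-complex lookups with NumPy arrays. The function name compute_per_complex_pr_fast is mine, not a library API; the AUC is computed with the trapezoid rule, which matches what sklearn's metrics.auc does when recall is non-decreasing. Whether it actually wins on 100M rows needs benchmarking on the real data.

```python
import numpy as np
import pandas as pd

def compute_per_complex_pr_fast(pairwise_df, terms_df):
    # Assumes pairwise_df is already sorted by score (descending) with a
    # fresh RangeIndex, exactly as produced by binary() + quick_sort().
    n = len(pairwise_df)
    both = pd.concat([pairwise_df["gene1"], pairwise_df["gene2"]], ignore_index=True)
    codes, uniques = pd.factorize(both)
    codes1, codes2 = codes[:n], codes[n:]

    # Group pair-row indices by gene code once (NumPy version of gene_to_pair_indices).
    order = np.argsort(codes, kind="stable")
    rows_by_gene = (np.arange(2 * n) % n)[order]
    starts = np.searchsorted(codes[order], np.arange(len(uniques) + 1))

    member = np.zeros(len(uniques), dtype=bool)
    auc_scores = np.full(len(terms_df), np.nan)

    for idx, genes in enumerate(terms_df["used_genes"]):
        gene_codes = uniques.get_indexer(genes)
        gene_codes = gene_codes[gene_codes >= 0]   # skip genes absent from the pairs
        if gene_codes.size == 0:
            continue

        # Candidate rows: every pair mentioning any complex gene, in rank order.
        cand = np.unique(np.concatenate(
            [rows_by_gene[starts[g]:starts[g + 1]] for g in gene_codes]))

        member[gene_codes] = True
        predictions = (member[codes1[cand]] & member[codes2[cand]]).astype(np.int64)
        member[gene_codes] = False                 # reset for the next complex

        total = predictions.sum()
        if total == 0 or predictions.size < 2:
            continue
        tp = predictions.cumsum()
        precision = tp / np.arange(1, predictions.size + 1)
        recall = tp / total
        # Trapezoid rule over (recall, precision); recall is non-decreasing,
        # so this equals sklearn's metrics.auc(recall, precision).
        auc_scores[idx] = np.sum(np.diff(recall) * (precision[:-1] + precision[1:]) / 2.0)

    out = terms_df.copy()
    out["auc_score"] = auc_scores
    return out
```

The boolean member array turns the two isin() calls into plain array indexing, and the precomputed rows_by_gene/starts pair replaces the dict of Python lists, so the hot loop stays in C.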
For dummy data (corr matrix, terms_df):
import numpy as np
import pandas as pd

# Set a random seed for reproducibility
np.random.seed(0)

# -------------------------------
# Create the 10,000 x 10,000 correlation matrix
# -------------------------------
num_genes = 10000
genes = [f"Gene{i}" for i in range(num_genes)]

rand_matrix = np.random.uniform(-1, 1, (num_genes, num_genes))
corr_matrix = (rand_matrix + rand_matrix.T) / 2
np.fill_diagonal(corr_matrix, 1.0)

corr_df = pd.DataFrame(corr_matrix, index=genes, columns=genes)

num_terms = 5000
terms_list = []

for i in range(1, num_terms + 1):
    # Randomly choose a number of genes between 10 and 40 for this term
    n_genes = np.random.randint(10, 41)
    used_genes = np.random.choice(genes, size=n_genes, replace=False).tolist()
    term = {
        "id": i,
        "name": f"Complex {i}",
        "used_genes": used_genes
    }
    terms_list.append(term)

terms_df = pd.DataFrame(terms_list)

# Display sample outputs (for verification, you might want to show the first few rows)
print("Correlation Matrix Sample:")
print(corr_df.iloc[:5, :5])  # print a 5x5 sample

print("\nTerms DataFrame Sample:")
print(terms_df.head())
Then run the function as compute_per_complex_pr(corr_df, terms_df).
