Sage-2.1 style fit benchmarks | @Chapin Cavender | Slides, Recording
Updated the fitting workflow to be similar to Sage 2.1: start from MSM values, add small-molecule data, fit torsions to optimized geometries, then ran experiments on top of that.
JW: If the favorability of alpha were perfect, would the value be 0? (slide 8)
CC: No, it would match the value from the QM scan. Favorability is the relative favorability of alpha vs beta, not FF vs QM.
DM: Might be easier to parse if you presented the error relative to QM.
JW: Is favorability sensitive to the boundary drawn between alpha and beta?
PB: What priors are you using?
CC: Initial values: 0.0.2 is Sage 2.0.0; 0.0.3 is regularized to MSM or Sage 2.1. Priors are from Sage 2.1.
PB: We tightened the priors for 2.2.
CC: Okay, we can try that in the future.
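For reference, a minimal sketch of how a relative alpha-vs-beta favorability could be computed from sampled backbone dihedrals, and why it is sensitive to where the region boundaries are drawn (JW's question above): shifting either rectangle changes the populations and therefore the reported value. The region definitions and the free-energy form are assumptions for illustration, not the exact metric from the slides.

```python
# Illustrative sketch (not the exact slide metric): relative favorability of the
# alpha vs beta Ramachandran regions as a free-energy difference estimated from
# sampled (phi, psi) angles. The rectangular region boundaries are assumptions.
import numpy as np

# Hypothetical region definitions in degrees.
ALPHA = {"phi": (-100.0, -30.0), "psi": (-67.0, -7.0)}
BETA = {"phi": (-180.0, -90.0), "psi": (90.0, 180.0)}

def in_region(phi, psi, region):
    """Boolean mask of frames whose (phi, psi) fall inside a rectangular region."""
    return (
        (phi >= region["phi"][0]) & (phi <= region["phi"][1])
        & (psi >= region["psi"][0]) & (psi <= region["psi"][1])
    )

def relative_favorability(phi, psi, temperature=300.0):
    """Delta G (alpha - beta) in kcal/mol from sampled backbone dihedrals."""
    kT = 0.0019872041 * temperature  # Boltzmann constant in kcal/(mol K)
    p_alpha = in_region(phi, psi, ALPHA).mean()
    p_beta = in_region(phi, psi, BETA).mean()
    return -kT * np.log(p_alpha / p_beta)

# Usage with fake data standing in for a trajectory's dihedrals:
rng = np.random.default_rng(0)
phi = rng.uniform(-180.0, 180.0, size=10_000)
psi = rng.uniform(-180.0, 180.0, size=10_000)
print(relative_favorability(phi, psi))
```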
JW: How should the numbers on slide 9 be interpreted?
CC: chi2 ranks agreement with experiment; higher is better. It is averaged over the 15 peptides (3-7 residues) for which we have scalar couplings. More details in the recording around 15 minutes.
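For context on the scalar-coupling comparison, here is a minimal sketch of a chi-squared style statistic between ensemble-averaged, back-calculated 3J(HN-HA) couplings and experiment. The Karplus parameters (roughly the Vuister & Bax parameterization) and the error model are assumptions; the slide's actual metric and normalization (where higher is better) are not spelled out in these notes and may differ.

```python
# Minimal sketch: back-calculate 3J(HN-HA) from phi angles via a Karplus curve,
# average over frames, and compare to experiment with a chi-squared statistic.
import numpy as np

def karplus_3j_hnha(phi_deg, A=6.51, B=-1.76, C=1.60):
    """Back-calculate 3J(HN-HA) in Hz from backbone phi angles (degrees)."""
    theta = np.radians(phi_deg - 60.0)
    return A * np.cos(theta) ** 2 + B * np.cos(theta) + C

def chi2(j_exp, phi_samples, sigma=0.9):
    """Chi-squared between experimental couplings and ensemble-averaged
    back-calculated couplings (phi_samples: one row of frames per residue)."""
    j_calc = karplus_3j_hnha(phi_samples).mean(axis=1)
    return np.mean(((j_calc - j_exp) / sigma) ** 2)

# Usage with fake data: 5 residues, 1000 sampled frames each.
rng = np.random.default_rng(1)
phi_samples = rng.normal(-65.0, 15.0, size=(5, 1000))
j_exp = np.array([6.0, 7.2, 5.5, 6.8, 7.0])
print(chi2(j_exp, phi_samples))
```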
JW: Where a column has only 2 bars, were the others not run?
"ff14SB Only SC" means only the side chains were fit (to QM, not to scalar couplings); the ff14SB backbone is fit to scalar couplings.
MG: How would delta show up in the slide 18 metrics? It would be interesting to look at the relative favorability of delta vs alpha, given the previous result that the delta region is highly sampled by the new FF.
CC: Sure, could compute that. Alpha vs delta will be very sensitive to the boundary, since delta is a broad peak surrounding the alpha region.
MG: Do we have QM results?
CC: Not for the 15-mer; it's too big.
MS: How well does the ff14SB quantum-only version fold?
CC: Haven't done that yet, but interested to look. It does better on unstructured peptides, so I suspect it would not fold well and might be similar to ours.
JW: Would you expect the ideal distribution for AAQAA to be 100% helical?
CC: No, expect about 50% helical, but it depends a little on which part of the peptide.
JW: Do you know what region the rest of the peptide should be in?
CC: No, just that it's about 50% helix.
MS: How would delta show up in the data on slide 12?
DM: Are you excited about this? This seems great.
MS: Is there a reason we think this fixed it, or is it just random?
CC: Still benchmarking the null model to see whether we need the specific model; it seems like null + the new workflow does not help helicity, which suggests the protein needs to be treated separately.
MG: Yes, but we've done null vs specific before; why did this specific one work?
CC: Starting from MSM could give a better starting point for bonds and angles. That doesn't directly affect torsions, but the torsions then don't have to compensate for bad angles/bonds. I think it's just fitting the QM data more precisely, which leads to better helicity, but it's not immediately clear what fixed it.
JW: How did 0.0.3 DW do?
LW: With the helicity plots, would you expect a change with more simulation time?
CC: For 3-point water models, the trend will probably stay the same; we might expect better sampling of the helix termini (the shape of the curve would improve but not the height, with less noise at the edges). For OPC, it still needs more time to fold.
LW: Would that also apply to the ratios of secondary structures?
CC: Yes.
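As a reference for the helicity curves discussed above, a minimal sketch of a per-residue helical fraction computed with MDTraj's DSSP assignment; this assumes MDTraj/DSSP as a stand-in for the actual analysis pipeline, which is not specified in these notes, and the file names are hypothetical.

```python
# Minimal sketch: per-residue helical fraction for a peptide like AAQAA3,
# i.e. the quantity behind the "shape vs height" comments above.
import mdtraj as md

def helical_fraction(trajectory_file, topology_file):
    """Return the fraction of frames in which each residue is assigned 'H' by DSSP."""
    traj = md.load(trajectory_file, top=topology_file)
    dssp = md.compute_dssp(traj, simplified=True)  # (n_frames, n_residues) of 'H'/'E'/'C'
    return (dssp == "H").mean(axis=0)

# Usage (hypothetical file names):
# per_residue = helical_fraction("aaqaa.dcd", "aaqaa.pdb")
# print(per_residue)         # typically lower at the termini, higher in the middle
# print(per_residue.mean())  # overall helicity, expected around 0.5 for AAQAA
```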
AbInitio target fits | @Chapin Cavender |
Switched TD targets for ab initio targets. Performs fairly similarly to the TD version, but makes alpha less favorable.
JW: What's up with Null-0.0.3-AI-DW on slide 22?
LW: Did you try one with only the protein TD targets converted to AI?
JW: One of the themes of the talk is that ff14SB only works with TIP3P; it seems like we work well with OPC, which is a better water model.
MG: Peptide J-coupling is a limited benchmark, though.
CC: Sure, but it seems to generalize to other benchmarks for larger systems.
JW: Glad that OPC3 is also good, since users won't be happy about a 4-point water model.
MS: But 4-point models are better; good that we are compatible.
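As a rough illustration of the TD-vs-ab-initio distinction discussed above (not ForceBalance's actual implementation): both target types boil down to a weighted least-squares objective on relative energies, where a torsion-drive target uses points along a 1-D torsion scan and an ab initio target uses an arbitrary set of QM conformer snapshots. The `energy_objective` helper and `denom` parameter below are illustrative names, not real API.

```python
# Illustrative sketch of an energy-matching objective shared by torsion-drive
# and ab initio style targets: compare relative FF energies to relative QM
# energies over a set of conformers.
import numpy as np

def energy_objective(e_qm, e_ff, denom=1.0):
    """Weighted sum of squared deviations between relative FF and QM energies.

    Energies are shifted to their respective minima so only relative energies
    matter; `denom` (kcal/mol) sets the scale of the residuals.
    """
    rel_qm = e_qm - e_qm.min()
    rel_ff = e_ff - e_ff.min()
    return np.sum(((rel_ff - rel_qm) / denom) ** 2)

# Usage with fake data: 24 grid points of a torsion scan, or 24 ab initio
# conformer snapshots -- the objective has the same form either way.
rng = np.random.default_rng(2)
e_qm = rng.uniform(0.0, 8.0, size=24)
e_ff = e_qm + rng.normal(0.0, 0.5, size=24)
print(energy_objective(e_qm, e_ff))
```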
JW: Glad to see we can get helicity by fitting only to QM.