Questions
Newest
Anonymous
22d ago
How do you construct the [n-plet] for such a state? An ontology or something else?
0
Anonymous
22d ago
In the graph, you used triplets in HippoRAG. What if you also want to model how information evolves along other dimensions, like time or context? In simple words: how does a node's path change in relation to its neighbors over time, and how is that change captured?
0
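One way to make the temporal dimension this question raises concrete is to extend (subject, relation, object) triplets into time-stamped quadruples, as in temporal knowledge graphs. A minimal sketch in Python, assuming a simple snapshot model; the names `TemporalFact` and `neighbor_changes` are illustrative and not part of HippoRAG:

```python
from dataclasses import dataclass
from collections import defaultdict

# A time-stamped extension of a (subject, relation, object) triplet;
# the quadruple form is borrowed from temporal knowledge graphs and is
# not something HippoRAG itself stores.
@dataclass(frozen=True)
class TemporalFact:
    subject: str
    relation: str
    obj: str
    time: int  # snapshot index or timestamp

def neighbor_changes(facts, node):
    """Group a node's outgoing edges by snapshot, then diff consecutive
    snapshots to see how the node's neighborhood evolved over time."""
    by_time = defaultdict(set)
    for f in facts:
        if f.subject == node:
            by_time[f.time].add((f.relation, f.obj))
    times = sorted(by_time)
    return {
        t2: {"added": by_time[t2] - by_time[t1],
             "removed": by_time[t1] - by_time[t2]}
        for t1, t2 in zip(times, times[1:])
    }
```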
Anonymous
27d ago
"Can the statement 'LLMs can be highly receptive to external evidence' indicate that any method requiring input to the model for evaluating its memory is unreliable? Or is there a way to determine whether the influence of the input we provide to the model on the results of evaluating its memory can be considered negligible?"
4
Anonymous
27d ago
As LLMs evolve, would HippoRAG become more effective?
0
Anonymous
27d ago
If you have a large enough context, wouldn't a reasoning model figure out the transitive relationships just by generating a lot of tokens?
1
Anonymous
27d ago
This is the first time I have seen mechanistic interpretability used to confirm a hypothesis. Do you think there is a larger application for this?
1
Anonymous
27d ago
Do you think an LLM world-model-based planner can handle disjunctive goals as well as conjunctive/compositional goals?
3
Anonymous
27d ago
Can Grokking help with the reversal curse? Knowing that A = B is the same as B = A might not be considered OOD...
3
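For context: the reversal curse is the observation that a model trained on statements of the form "A is B" often fails to answer "B is A". One commonly discussed mitigation, separate from grokking, is to augment the training data with reversed statements. A minimal sketch, assuming facts arrive as (subject, relation, object) tuples and that the inverse-relation map is written by hand; the map below is illustrative:

```python
# Map each relation to its inverse; hand-written and illustrative.
INVERSE = {
    "is the capital of": "has as its capital",
    "wrote": "was written by",
    "is the mother of": "is a child of",
}

def augment_with_reversals(facts):
    """Return the original (subject, relation, object) facts plus a
    reversed copy for every relation with a known inverse."""
    out = list(facts)
    for subj, rel, obj in facts:
        inv = INVERSE.get(rel)
        if inv is not None:
            out.append((obj, inv, subj))
    return out

# e.g. ("Paris", "is the capital of", "France") also yields
#      ("France", "has as its capital", "Paris")
```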
Anonymous
27d ago
Could RLHF be the main reason that LLMs are receptive to new knowledge or knowledge editing?
3
Anonymous
27d ago
Will HippoRAG become less useful as LLMs evolve?
1
Anonymous
27d ago
So does grokking happen when we keep overfitting the training data over many epochs?
2
Anonymous
27d ago
Given past and current progress, how soon do you expect advanced agents to be able to automate most (say, >50%) of remote work? Does this seem like it's just around the corner? Or do we have more like 10 or so years until this happens?
2
Anonymous
27d ago
Do standard anti-overfitting (i.e. regularization) techniques like dropout help or hurt this grokking of reasoning techniques?
6
Anonymous
27d ago
Are all training data just facts, or are there also value-laden statements? How does LLM reasoning handle value-laden statements and fact/value distinction issues?
3
Anonymous
27d ago
Are all training data still just facts, or are there also value-laden statements? How does LLM reasoning handle value-laden statements and fact/value distinction issues?
1
Anonymous
27d ago
Given that the 'compositionality gap' does not decrease with scale, what alternative architectural changes or training paradigms (e.g., symbolic reasoning, knowledge graph integration, or curriculum learning) could help LLMs improve in multi-hop reasoning and implicit comparison tasks?
4
Anonymous
27d ago
Since the model performs differently on comparison vs. composition tasks, are there any intuitive reasons why comparison reasoning is harder or easier for Transformers?
0
Anonymous
27d ago
I have two questions: (1) Can we predict which tasks or datasets will exhibit grokking? Are there specific properties that make a problem "grokkable"? (2) Are there other forms of generalization beyond grokking? How can we ensure that models generalize robustly to truly novel situations, not just variations of the training data?
3
Anonymous
27d ago
I have two questions:
0
Anonymous
27d ago
Could RLHF be the main reason that LLMs are receptive to new knowledge or knowledge editing?
0
Anonymous
27d ago
How does the size of the LLM influence its reasoning capability?
1
Anonymous
27d ago
What is the best approach for episodic memory, and how can it be combined with the current approach?
3
Anonymous
27d ago
There are at least two parts to RAG: storage and retrieval. So, maybe two scaling questions: how much can we store, and how does retrieval scale with storage?
0
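The storage/retrieval split in this question can be made concrete: storage cost grows linearly with the corpus, while brute-force retrieval cost grows linearly per query, which is why approximate nearest-neighbor indexes are used at scale. A minimal dense-retrieval sketch; the class names here are hypothetical, not HippoRAG's actual components:

```python
import numpy as np

class VectorStore:
    """Storage half: passages plus their unit-normalized embeddings."""
    def __init__(self, dim):
        self.texts = []
        self.vecs = np.empty((0, dim))

    def add(self, text, vec):
        self.texts.append(text)
        self.vecs = np.vstack([self.vecs, vec / np.linalg.norm(vec)])

class Retriever:
    """Retrieval half: brute-force cosine similarity, O(N) per query."""
    def __init__(self, store):
        self.store = store

    def top_k(self, query_vec, k=5):
        q = query_vec / np.linalg.norm(query_vec)
        scores = self.store.vecs @ q  # cosine, since rows are unit norm
        idx = np.argsort(-scores)[:k]
        return [(self.store.texts[i], float(scores[i])) for i in idx]
```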
Anonymous
27d ago
How do you see HippoRAG and the knowledge graph it builds in its "hippocampus" in comparison to "classical" knowledge graphs? (like Google Knowledge Graph, DBpedia, Wikidata)
6
Anonymous
27d ago
Will the knowledge graph be updated given new information? If yes, how?
3
Anonymous
27d ago
How do you distinguish between episodic and semantic memory in the design of the long-term memory?
3
Anonymous
27d ago
How big can the RAG store get, in terms of size?
0
Anonymous
27d ago
How did you ensure that each triplet extracted was high-quality? I have faced a lot of low-information triplets in my Knowledge Graphs in the past, so I was wondering what techniques you used to optimize the graphs.
2
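On low-information triplets: one simple heuristic family (not necessarily what the HippoRAG authors used) is to drop triplets with empty or generic slots, self-loops, and duplicates before building the graph. A minimal sketch under those assumptions:

```python
# Heuristic triplet filter; illustrative, not the authors' pipeline.
GENERIC_TERMS = {"it", "thing", "something", "one", "he", "she", "they"}

def is_informative(triplet):
    subj, rel, obj = (part.strip().lower() for part in triplet)
    if not subj or not rel or not obj:
        return False          # a slot is empty
    if subj in GENERIC_TERMS or obj in GENERIC_TERMS:
        return False          # unresolved pronoun / generic entity
    if subj == obj:
        return False          # self-loops rarely add signal
    return True

def filter_triplets(triplets):
    """Keep informative, case-insensitively deduplicated triplets."""
    seen, kept = set(), []
    for t in triplets:
        key = tuple(part.strip().lower() for part in t)
        if key not in seen and is_informative(t):
            seen.add(key)
            kept.append(t)
    return kept
```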
Anonymous
27d ago
On knowledge/entity editing (injecting other memories): does he mean the kind of prompt injection that hackers use to alter facts in the data?
1
Anonymous
27d ago
hey
2