
GG freezes after too many loops


Platonic

Question

 

The script must load all addresses within a range, with an offset of 4, and store all of them in a new table named "table". The problem is that GG crashes or gets stuck after a minute.

ranges = gg.getRangesList("anon:libc_malloc")
anon = {}
for i, v in ipairs(ranges) do
  if v.state == "Ca" then
    anon[#anon + 1] = {anonStart = ranges[i].start, anonEnd = ranges[i]["end"], loop = ranges[i]["end"] - ranges[i].start}
  end
end
table = {}
-- e = {}
for i, v in ipairs(anon) do
  for j = 1, v.loop/4 do
    table[#table + 1] = {address = v.anonStart, flags = gg.TYPE_QWORD}
    v.anonStart = v.anonStart + 4
  end
end
gg.loadResults(table) -- gg gets stuck

I understand that I can use gg.searchNumber(), but I want/need all addresses and want to load them when I have to using gg.loadResults(). Is there a fix for the crash?


15 answers to this question

Recommended Posts


For me the main problem is optimization.

The problems I noticed:

  • You are using global variables everywhere

Use local variables, they are faster.

  • Can you explain why you double-loop here?

The first loop might be OK, but the second one does far too many iterations; a range length divided by 4 is still a big number.

5 hours ago, Platonic said:
for i, v in ipairs(anon) do
  for j = 1, v.loop/4 do
    table[#table + 1] = {address = v.anonStart, flags = gg.TYPE_QWORD}
    v.anonStart = v.anonStart + 4
  end
end

After testing, this is approximately how many iterations you run each time, and the list keeps going, so it is no surprise that it crashes.

[screenshot: per-range iteration counts]

code used for test

local ranges = gg.getRangesList("anon:libc_malloc")

for i, v in ipairs(ranges) do
  print('( v["end"] - v.start ) / 4 => ', (v["end"] - v.start) / 4)
end

 

  • You override a native library

Unless you do it deliberately: table is a standard Lua library, so when you use it as a variable name you shadow everything in it.

  • You use #identifier + 1 to set your table index

This is a real performance problem. If you loop just 10 or 20 times it might be OK, but here you loop well over 1000 times, probably far more. The # operator has to recompute the table's length on every insertion, so once the table holds 10k+ items you can imagine how slow and memory-hungry that gets.

At the beginning you say you must load values with an offset of 4, but since a DWORD is 4 bytes (32-bit, not 8-bit), an offset of 4 just means the very next value.
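The last two points above could be addressed with something like this sketch. All names here are hypothetical: `anon` stands in for the OP's filtered range list, and a tiny stub replaces the gg API so the snippet runs standalone.

```lua
-- Stub of the GameGuardian API for running outside GG (assumption).
local gg = gg or {TYPE_DWORD = 4}

-- Hypothetical sample input shaped like the OP's `anon` table.
local anon = {
  {anonStart = 0x1000, loop = 16},
  {anonStart = 0x2000, loop = 8},
}

-- Avoid shadowing the standard `table` library (use another name) and
-- avoid `#t + 1`: keep the running length in a local counter instead,
-- so no length recomputation happens on every insertion.
local results = {}
local n = 0
for _, v in ipairs(anon) do
  local addr = v.anonStart
  for _ = 1, v.loop / 4 do
    n = n + 1
    results[n] = {address = addr, flags = gg.TYPE_DWORD}
    addr = addr + 4
  end
end
-- results now holds 16/4 + 8/4 = 6 entries
```

This only fixes the constant-factor costs; the iteration count itself is still the bigger issue discussed below.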

 

 

 

Edited by MAARS

For instance, this is approximately how many values you are dealing with in the Ca region. It is a nightmare: I have a PC with 16 GB RAM and an 8 GB RAM emulator, and it still takes forever. You need to change your approach; this will never succeed as-is.

code used for test

gg.setRanges(gg.REGION_C_ALLOC)
gg.searchFuzzy('0', gg.SIGN_FUZZY_EQUAL, gg.TYPE_DWORD, 0, -1, 0)

[screenshot: fuzzy search result count in REGION_C_ALLOC]

This is the approach I used; at least it doesn't crash, but it still takes too long as well:

gg.clearResults()

gg.setRanges(gg.REGION_C_ALLOC)
gg.searchFuzzy('0', gg.SIGN_FUZZY_EQUAL, gg.TYPE_DWORD, 0, -1, 0)

local resultsCount = gg.getResultsCount()

if (resultsCount == 0) then
  print('No results found')
  return
end

local results = gg.getResults(resultsCount)

for i = 1, resultsCount, 4 do
  results[i] = nil
end

gg.clearResults()
gg.loadResults(results)

 

Edited by MAARS
42 minutes ago, under_score said:

android 6.0 was released in 2015 (7 years ago)

i think you need to buy a newer phone

It's unfortunate that there is no maturity test before making an account. It would be great if you could change your attitude a bit. People who want to show how superior they are act the way you do, whether by giving joke answers to serious questions or by simply being unable to have a proper conversation. Very self-centered, and seriously arrogant.

My question remains.

16 minutes ago, Platonic said:

People that want to show how superior they are do act in a way as you do

That's just how technology works.

You can't expect a 7-year-old phone to have the best performance.

Edit: that's like going to a computer shop and complaining that your 7-year-old computer can't run the latest AAA games at max graphics.

Edited by under_score
3 hours ago, under_score said:

android 6.0 was released in 2015 (7 years ago)

i think you need to buy a newer phone

He never said which Android version he has; maybe it is up to date and the error comes from another reason.

Maybe it requires this setting: check whether you have "data in RAM" enabled.

[screenshot: GG settings showing the "data in RAM" option]


Additional issue found:

The first loop is useless. You already filtered ranges using "anon:libc_malloc", which means every range in the returned list will have state "Ca".
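With the state check dropped, the collection loop reduces to something like this sketch (the gg stub and sample range are hypothetical, just so the snippet runs standalone):

```lua
-- Stub of gg.getRangesList for running outside GG (assumption): one fake range.
local gg = gg or {
  getRangesList = function(_)
    return {{start = 0x1000, ["end"] = 0x1100, state = "Ca"}}
  end,
}

-- "anon:libc_malloc" already filters to Ca ranges, so no state check is needed.
local anon = {}
for i, v in ipairs(gg.getRangesList("anon:libc_malloc")) do
  anon[i] = {anonStart = v.start, anonEnd = v["end"], loop = v["end"] - v.start}
end
```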

Edited by MAARS

Hi! Just sharing some ideas:

  • The code keeps storing new keys into a global variable; the performance drop could come from the program struggling to resize the table. It's not the global scope at fault so much as the I/O load that should be blamed.
  • When dealing with this, I usually use a cache for storage and delete it automatically after the recursion ends. You might need to limit I/O here: store a fixed number of items in the table, process them immediately, dump the result into cache files, and repeat the cycle.

More or less, I agree with MAARS. The emulator side can behave differently from a native device, because most emulators use VirtualBox, which is slow by default compared to QEMU. Even though it's a "hard" operation, I personally wouldn't rely on emulator results; rather test directly in a native environment.

EDIT1:
Tested the script on a phone with 8 GB RAM (6 GB free): it also freezes, but doesn't crash.

Edited by MainC

The main issue is with the approach itself. Processing all results/values at once isn't going to work when there are millions of them. There is simply not enough memory available to the Java part of the application either to load millions of results at once or to build a table in a Lua script that represents millions of values. The solution is to process results/values in parts. A reasonable choice of part size is around 100k, since in practice GG can load 100k results fine in most cases.

Another issue, as MAARS also mentioned above, is efficiency. The first thing to avoid is calling gg.loadResults when it's not necessary. If applicable, use searches to get all results for processing, then process them in parts by getting results and removing them after each part is processed. Alternatively, build a table with one part of the values to process, but then don't use the results list; get/set values directly via the corresponding API functions, then repeat for the remaining parts.
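A minimal sketch of this part-wise processing, under stated assumptions: the search has already been performed, and since the snippet runs outside GG, the three API calls used (getResultsCount, getResults, removeResults, all real GG functions) are stubbed over a fake results list.

```lua
-- Fake results list plus a stub of the GG API, so the sketch runs standalone.
local fake = {}
for i = 1, 250 do fake[i] = {address = i * 4, value = 0} end
local gg = gg or {
  getResultsCount = function() return #fake end,
  getResults = function(n)
    local out = {}
    for i = 1, n do out[i] = fake[i] end
    return out
  end,
  removeResults = function(part)
    for _ = 1, #part do table.remove(fake, 1) end
  end,
}

local PART = 100  -- a real script would use ~100000, per the post above
local processed = 0

-- Pull results one part at a time, handle the part, then remove it so the
-- next getResults call returns the following part.
while true do
  local count = gg.getResultsCount()
  if count == 0 then break end
  local part = gg.getResults(math.min(count, PART))
  processed = processed + #part   -- "handle" the chunk (counting, here)
  gg.removeResults(part)
end
-- processed == 250 after three iterations (100 + 100 + 50)
```

The key point is that at no time does the script hold more than PART entries in memory, which is what avoids the freeze.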


Based on both @MainC's and @CmP's suggestions, I came up with this solution.

I cluster the operation into 100k-value chunks and remove addresses at a stride of 4 (one in every four), since I guess that's what the author wants to do; if not, just modify that logic in the code.

As you guessed, my approach is: start with all values, then remove the unwanted ones.

But this is still slow as hell, and right now I don't see any way to speed it up.

Another way to get rid of the values you don't want is to filter by address, since you want to skip by an offset of 4.

DWORD addresses are laid out like this:

address = 0x0
next = previous address + 4 = 0x4
next = previous address + 4 = 0x8

and so on.

Then you just need to keep the addresses that end with 4, in case you skip the first one in the list; otherwise keep those that end with 0 or 8.
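The address-filter idea could be sketched like this (hypothetical helper and sample data; a stride of 8 with phase 0 keeps every other DWORD starting from the first, phase 4 keeps the other half):

```lua
-- Keep only results whose address matches a given stride/phase (sketch).
local function keepStride(results, stride, phase)
  local kept, n = {}, 0
  for _, r in ipairs(results) do
    if r.address % stride == phase then
      n = n + 1
      kept[n] = r
    end
  end
  return kept
end

-- Hypothetical DWORD addresses: 0x0, 0x4, 0x8, 0xC, 0x10, ...
local results = {}
for i = 0, 9 do results[i + 1] = {address = i * 4} end

local firstHalf = keepStride(results, 8, 0)  -- keeps 0x0, 0x8, 0x10, ...
```

An arithmetic test on the address avoids string conversion, which matters at the volumes discussed in this thread.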

Attached: index.lua (approach 1) and index.lua (approach 2)

 

Edited by MAARS
19 hours ago, MAARS said:

for me the main problem is optimization, [...]

Appreciate the info about memory occupation in Lua, and thanks as well for the script examples! Although your scripts are far more efficient, I assumed the 100k method would be faster at covering the full address range of a segment. The reason I don't want to use a search is that I would miss data allocated at currently unused memory addresses while old data gets replaced or cleared by memory management. Depending on whether an address happens to be in use was out of the question for me, so I thought I could get all addresses in a reasonable time frame. That was my mistake, given loops of 1M+ iterations.

CmP is basically pointing out that it is not possible to have a table holding all those addresses because of memory. And if even your method would still take a long time to process, then I guess I have to work toward something else, because it's just not pleasant to work like that. The scripts you provided were educational for efficiency!

Edited by Platonic

