IceKontroI

Registered
  • Content Count: 1,116
  • Joined
  • Last visited
  • Days Won: 12
  • Feedback: 100%

Everything posted by IceKontroI

  1. @CyberWizard OHJESUZ YOU DON'T KNOW? RUNESCAPE HAVE UPDATE ON EVERY THURSDAY READ A BOOK IDIOT.
  2. I have a tiny set of jangerberries.
  3. The only mule ban I ever had was when trading a lot of wealth (1+ bil) across many bots in a short period of time, and most likely happened because the method I was using on those bots was high risk.
  4. Best way to get better at anything is to set a goal that is moderately outside of your comfort zone and then accomplish it step by step. Learn the things you need to learn in order to accomplish the task and then get started. Do a little every day/week so you won't burn yourself out.
  5. It varies from person to person.
  6. You posted this earlier: if the only way for tannerOpen to become 1 is when the click value is 2, then it should be impossible for the click value to be 0 and for tannerOpen to be 1 immediately afterwards. Unless you've already fixed the code from that post, you need to find out why that specific sequence of events is happening.
  7. From what I can see, your issue is coming from your code structure. You need to think about under what conditions you'll allow your program to set tannerOpen to 1. You need to be absolutely certain that the ONLY way tannerOpen can equal 1 is when you successfully click the target and Timing.waitCrosshair() produces a value of 2. Then, and only then, does the tanner interface qualify as open. Before anything else, make 100% sure you can create a scenario in which you successfully click the tanner and produce a waitCrosshair() value of 2, meaning the click landed and produced a red crosshair. Solve that and you'll probably fix the whole thing.
  8. I can only see the place where your tannerOpen var is set to 1; I'm assuming you set it back to 0 at some point, right? If you're not resetting it to 0 after a successful interaction, that could be part of the problem. You should also be checking that the value produced by waitCrosshair() == 2 before setting tannerOpen to 1, because only a red crosshair means the click was successful. Try again, but println the value of waitCrosshair() as well, and set tannerOpen to 1 only if waitCrosshair() returns 2. Try these changes and let us know what happens.
  9. Right after you do AccurateMouse.click(ellis[0], "Trade"), you can call Timing.waitCrossHair(100). If it returns 2 (red crosshair), you successfully clicked something, probably Ellis. If it returns 1, you missed (yellow crosshair). 0 means no crosshair appeared within 100 milliseconds, so you probably need to increase the timeout from 100. If increasing it still doesn't produce a return value of 1 or 2, then your click function is returning true even though the click is failing; in that case I'd try a different method, like dynamic mouse clicking. A minimal sketch of this click-then-verify flow is included after this list of posts.
  10. You can try looking for the little red/yellow X to appear on screen to confirm if you were successful or not. Timing.waitCrosshair is what you'll need for that.
  11. Ban
    https://support.runescape.com/hc/en-gb/articles/115002238729-Account-Bans
  12. It's not all or nothing when it comes to safety. Of course what you're saying is true, and you're giving good advice, but he wants to mitigate risk; he's not asking for a 100% ban-proof method. At least I hope he's not.
  13. Get IntelliJ IDEA Community Edition; it's free and awesome.
  14. I saw a guy once training behind Lumbridge castle on giant rats. He had a tinderbox, axe, and some combat equipment. He would collect the rat meat, then chop a log, light a fire, cook the meat, heal up, and start fighting again. 100% self-sufficiency, which I thought was pretty clever.
  15. I would be interested in the results. Please make a thread about it if you do end up going through with that idea.
  16. It's a lot simpler to click a thumbs up/down icon on a dialog box that pops up automatically than it is to open a browser -> go to tribot.org/forums -> find the thread -> type and submit a post. This means you need to be motivated to actually go through the current lengthy feedback process, and nothing motivates a TRiBot botter more than getting banned. Rating systems are shifting towards simpler formats (YouTube going from a 5-star system to a like/dislike system) because simpler formats produce less biased feedback. I'm not saying people will never leave unwarranted bad feedback, just that it's going to be nowhere near as bad as people seem to think it'll be.
  17. I agree with you. Overall it will reward scripts that offer a positive user experience, while punishing those that do the opposite. You said the rating should be solely based on script quality, which would be perfectly fair if scripts all had the same banrate across the board. This isn't the case, and the element of banrate shouldn't be ignored; it's arguably a more important factor to the users than how much GP/EXP the script earns per hour. Rating a script by a combination of its writing quality and banrate encourages not only high quality scripts, but also scripts that use safer methods. That's a good thing for users, which means a bigger user-base, which in turn means more customers for scripters.
  18. A system like this needs to consider bans in order to objectively capture the user experience sentiment. Is it in the scripter's best interests from a financial gain perspective? Not at all. You two are both premium scripters, so your posts quoted above present a conflict of interest for the new system.
  19. This suggestion thread is to entertain the idea of adding a user rating system to TRiBot repository scripts. Here's the proposed implementation: once a script is stopped, if it's a public repository script, a dialog box appears asking the user to rate their experience with either a thumbs up or a thumbs down. Ratings are timestamped, and repository search results can then be filtered by average user rating, with an option to only use ratings from the past {day, week, month, year}. Below are the pros and cons worth mentioning that I (or anyone in the comments) can think of. I'll also propose solutions to the cons, since I know that's what people will tend to focus on. @TRiLeZ @Todd @Usa

    Pros
      • Provides a single unified way to score repository scripts, which we currently do not have. The current ways of rating a script are:
          • Scripter's reputation: not very accurate, and only works for popular scripters.
          • Posts on the script's thread: tedious to read through many recent posts.
          • User testimonials: anecdotal and typically not representative of the overall experience.
      • It becomes significantly easier for new and experienced users to choose scripts from the Repository.
      • Script writers get a way to gauge customer satisfaction. This will lead to:
          • More appropriate pricing for scripts.
          • Better script update/patch focus from the scripter.
      • It's another reason to choose TRiBot over other clients that don't have a system like this.

    Cons
      • Users can repeatedly start then stop the script to gain access to another rating.
          • Solved by only allowing a TRiBot user to rate the same script once every 12 hours of real time.
      • Users new to a script who can't figure out how to use it in the first 5 minutes will rate it poorly.
          • Solved by preventing users from rating a script until they've run it for at least 6 hours of script runtime (excluding pauses and breaks). A rough sketch of these two eligibility rules is included after this list of posts.
      • User bans will influence ratings. Keep in mind that bans are part of the user experience, which is what the rating system aims to measure. High-banrate scripts will get lower ratings because of:
          • User error: poor botting practices lead to users blaming the script. This is a universal constant and applies equally to all scripts, so it doesn't affect the comparison process.
          • Overall OSRS botting banrate: another universal constant.
          • Method-specific banrate: some botting methods are more closely monitored than others, leading to unfairly low ratings due to bans. When users compare scripts of the same method, this is a constant across all those scripts, so it makes no difference. When users compare scripts with diverse methods, the rating they see won't reflect the true quality of the script, so users will gravitate toward lower-banrate scripts. That isn't objectively fair, but it will improve the overall TRiBot user experience.
          • Script-specific banrate: scripts with poor antiban or botlike patterns will be rated worse than others. Good, that's the way it should be.
      • Scripters who choose to write scripts for high-banrate skills will be seen as worse scripters than they really are. There are two options here:
          • Simply avoid the methods that are known to have higher banrates. Unfortunately this doesn't apply to existing scripts.
          • Work hard to develop a stronger antiban implementation for the scripts that need it. If you do this properly:
              • Your script will stand out as the only good one in a category of otherwise poorly rated scripts.
              • You will gain nearly all the market share for that category.
              • You can charge more for the script since the only other options are poorly rated ones.
              • You'll drive innovation in the field of antiban if you're successful here.
      • The implementation of a system like this will take lots of time and resources. It is what it is; a rating system like this will improve TRiBot significantly, but it doesn't come without costs.

    Overall, it's not a perfect system, but there are workarounds for many of the issues that come up. If you can think of any more pros and cons, or better solutions to existing cons, please post them and I'll update the OP. To me, the benefits outweigh the disadvantages, most of which are either minimal when you analyze them or have simple solutions. The Repository does need some revitalization, and this would significantly improve it and the TRiBot user experience.
  20. Can you post a snippet of the code you're using to open the .zip file in the OP? That will probably be diagnostically relevant. @warrbrown
  21. The solution is almost certainly script-specific, so without knowledge of how the script works, the best I can do is point you in the right direction. I'll need to see the stack trace, which is the error output printed by the script when it runs out of memory and crashes.
  22. Do you get an OutOfMemoryError at any point? If so, it's possible that the script you're running has a memory leak that is gradually eating away at the heap space, causing lag and eventually the error I mentioned. A small heap-logging example is included after this list of posts.
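A minimal sketch of the click-then-verify flow described in posts 7-9 above. The method names (AccurateMouse.click, Timing.waitCrossHair) and the 0/1/2 return convention are taken straight from those posts rather than from a verified API, and ellis and tannerOpen are the original asker's own names; treat this as an outline of the logic, not a drop-in implementation.

    // Sketch only: names and return values follow the posts above.
    // ellis is assumed to be a non-empty array of NPC handles, as in the asker's code.
    if (AccurateMouse.click(ellis[0], "Trade")) {
        // 0 = no crosshair within the timeout, 1 = yellow (missed), 2 = red (click landed)
        int crosshair = Timing.waitCrossHair(800);
        System.out.println("waitCrossHair returned: " + crosshair);
        if (crosshair == 2) {
            // A red crosshair is the ONLY condition allowed to set tannerOpen to 1.
            tannerOpen = 1;
        }
    }

    // ...and once the tanning interface has been handled, reset the flag:
    tannerOpen = 0;

The point of this structure is that tannerOpen can never become 1 on a missed or timed-out click, which is exactly the invariant posts 6-8 ask for.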
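For post 19, here is a hypothetical illustration of the two rating-eligibility rules proposed in the Cons section: at least 6 hours of script runtime (excluding pauses and breaks) before a user may rate, and at most one rating per user per script every 12 hours of real time. None of these class or method names exist in TRiBot; this only sketches the policy.

    import java.time.Duration;
    import java.time.Instant;

    // Hypothetical eligibility check for the proposed rating dialog.
    public class RatingPolicy {
        private static final Duration MIN_RUNTIME   = Duration.ofHours(6);   // script runtime, excluding pauses/breaks
        private static final Duration RATE_COOLDOWN = Duration.ofHours(12);  // real time between ratings of the same script

        public static boolean canRate(Duration totalRuntime, Instant lastRating, Instant now) {
            boolean ranLongEnough = totalRuntime.compareTo(MIN_RUNTIME) >= 0;
            boolean cooldownOver  = lastRating == null
                    || Duration.between(lastRating, now).compareTo(RATE_COOLDOWN) >= 0;
            return ranLongEnough && cooldownOver;
        }
    }

The rating dialog from the suggestion would only be shown when canRate(...) returns true, which covers both the start/stop-spam con and the rated-after-5-minutes con.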
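For posts 21-22, a small standalone example of how a user could make a gradual memory leak visible before the crash: log used versus maximum heap every few seconds while the script runs. This uses only the standard java.lang.Runtime API and is independent of TRiBot; it's a diagnostic aid, not a fix.

    // Standalone heap logger: a used-heap figure that climbs steadily while the
    // script is doing the same work over and over is a strong hint of a leak.
    // The eventual OutOfMemoryError's stack trace then shows where it blew up.
    public class HeapLogger {
        public static void main(String[] args) throws InterruptedException {
            Runtime rt = Runtime.getRuntime();
            while (true) {
                long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
                long maxMb  = rt.maxMemory() / (1024 * 1024);
                System.out.println("Heap used: " + usedMb + " MB / max " + maxMb + " MB");
                Thread.sleep(5_000);
            }
        }
    }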