![Paweł Hałdrzyński Profile](https://pbs.twimg.com/profile_images/1194704414029864966/UYZkaGoL_x96.jpg)
Paweł Hałdrzyński
@phaldrzynski
Followers
646
Following
15
Statuses
172
Researching web applications' security at daylight - auditing smart contracts at night
Poland
Joined November 2019
For the 2nd year in a row, my research was chosen for 'Top 10 web hacking techniques'. It's very encouraging that my 'WAF evasion techniques' is among other awesome researches and that I'm able to share my security thoughts with the #infosec community!
1
10
32
Not only was it my first Live Hacking Event, but also the first time I have ever been on stage! I was nominated to do a Show and Tell and talked about the vuln. I had found during #AmbassadorWorldCup #AWC2024 Elite Eight round. It was nice to meet so many amazing, skilled people, in this beautiful (but cold) city of Prague!
What a way to finish the Elite Eight round! 💪 Each of these amazing teams' incredible work over the last 11 days is something to be extremely proud of. On behalf of the entire HackerOne team and our #AmbassadorWorldCup partners @ASWatsonGroup and @okx--- THANK YOU! 🙌 Stay tuned to see which teams advance to the next round 🔜.
0
0
5
Cannot agree with this. If you want professionals, then you either open an invite-only program and spend time on selectively choosing the right bug-hunters for your target, or you request a pentest/vulnerability assessment. You can't have your cake and eat it too. Open bug bounty program means that more eyes are willing to look for vulnerabilities in your target - but, some of the reports will always be spam/low quality.
@DKidolle This is a platform to connect professionals, not a beginner playground. Locking accounts of low quality researchers for a few months + giving them some resources to learn is much better to let them consider the platform as their training resource.
0
0
0
@IceSolst @LiveOverflow Not to mention, that alert('XSS') is actually very ineffective way to test for XSSes. You're gonna find a lot of webapps, which do not sanitize <> characters, but do escape/encode an apostrophe. You might miss a lot of XSSes, 'cos your alert box simply won't pop up.
1
0
1
@BwE_Dev @LiveOverflow Nah, because nowadays, most of the webapps uses httpOnly flag on their session cookie. This means, that document.cookie won't be accessible via JS and your alert-box's gonna be empty.
1
0
0
Większość paywalli jest SEO-friendly. Treści muszą się jakoś indeksować (inaczej, konkurencja bez kłódki na tekstach, występowałaby wyżej w wynikach wyszukiwarek). Sporo paywalli to zwykłe nakładki na UI (np. dodatkowa warstwa w CSS, zasłaniająca główną część artykułu). Wtedy, o ile w samej przeglądarce taka treść jest rzeczywiście zasłonięta - w samym kodzie strony już nie. AI nie renderuje stron, a pobiera cały jej kod (i z niego wyciąga tekst). Jeszcze inne paywalle, wyłączają się, jeśli rozpoznają, że stronę odwiedza bot wyszukiwarki (wtedy, np. Googlebot widzi całą treść - bez paywalla, i może ją w całości zaindeksować). Podobnie boty AI. Wykrycie, że stronę odwiedza dany bot, i zablokowanie mu dostępu do całości artykułu jest stosunkowo proste.
0
0
0
@philbugcatcher Not only the sleep's duration - its quality is even more important. Understanding the sleep cycles and examining own sleep patterns is super important to improve the latter.
0
0
0
@existencebt @lanimwar Nah, the month is first, because it's the most common way of reading the date: "It's April 2nd" (more common to hear that) vs "The 2nd of April".
2
0
0
Good prompting can reduce the number of vulns. produced by LLMs, but won't generate 100% bug-free code. There is very cool research paper: "Prompting Techniques for Secure Code Generation: A Systematic Investigation" - which demonstrates, that even CWE-specific prompting (basically telling AI which classes of vulns it should have in mind while generating a code) - produced vulnerable code.
1
0
1
Not only this. Hallucinations are even bigger concern. 1. AI hallucinates package name which does not exist. 2. Attacker creates that package with malicious code 3. Voilà - you have a backdoored code copy-pasted from LLMs
As a hacker I am thrilled to see how often LLMs produce vulnerable code. 👏 As someone who cares about cybersecurity, I'm low-key terrified 😬
0
0
1
If a design choice leads to total loss of funds (as you said) - then there's a vulnerability at the design level. It happens sometimes, that devs implement new feature and do not realize that this feature may have some security consequences. It's auditor/SR job to explain to the Customer, that their design choice is insecure and impacts the integrity of the protocol.
0
0
1
@AliX__40 That sounds a lot. Joking aside - whenever you'll decide to limit/quit - please remember, do not quit caffeine cold turkey. It's better to gradually reducing the amount of it
1
0
1
This was super fun discovery. While it seems to be already fixed, they are tons of other ways to get the internal prompt. You can ask to reveal the first Nth characters of the prompt, translate the prompt or even tl;dr; it (in case, when the input is limited). Examples are in my previous tweets! #AI
0
0
1
@vxunderground @mitchelldeamon @nyaathea It's still possible to get those via chat directly (instead of user name). You can ask to reveal X characters of the prompt, or to translate the prompt, or to tl;rd; prompt (I've put some examples in my latest tweets)
0
0
1