I'll try with a concrete example, hoping it'll make sense.
Let's say you're making a game like Space Harrier: https://www.youtube.com/watch?v=Hzgrb-mjLaM&t=77s
- You want to have different cameras with different angle/zoom
- You want the puppet to move to the edges of the screen.
- You also want to add a crosshair that moves to the screen edges as well, separate from the player.
The basic idea is that the engine already knows everything about your screen: where it "ends", what's in it, and so on. So it would make sense to just retrieve that data and use it directly for character/HUD/enemy movement.
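For context, this is the kind of math the engine already runs every frame to draw anything at all. A minimal sketch of world-to-screen projection in Python (the function name is mine, and the camera is assumed to sit at `cam_pos` looking straight down -Z with no rotation, which is a big simplification of what a real renderer does):

```python
import math

def world_to_screen(point, cam_pos, fov_deg, aspect, screen_w, screen_h):
    # Position of the point relative to the camera.
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    if z >= 0:
        return None  # behind the camera, not on screen
    # Perspective divide: focal factor from the vertical field of view.
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    ndc_x = (f / aspect) * x / -z
    ndc_y = f * y / -z
    # Map normalized device coords [-1, 1] to pixel coords.
    sx = (ndc_x + 1.0) / 2.0 * screen_w
    sy = (1.0 - ndc_y) / 2.0 * screen_h
    return (sx, sy)

# A point straight ahead of the camera lands at the screen centre.
print(world_to_screen((0, 0, -10), (0, 0, 0), 90, 16 / 9, 1920, 1080))
```

The point of the sketch: every value it needs (camera position, FOV, screen size) is something the engine already tracks internally, which is exactly the data the suggestion asks Dreams to expose.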
Right now this can be done 100% in Dreams, but it's pretty cumbersome: you basically have to build a whole setup just to extract data the engine already knows but doesn't expose.
Space Harrier is also a fairly easy setup; you might want this for a more complex 3D environment with multiple dynamic cameras. That's even more work, and Dreams is a massive time-sink already.
Having inputs/outputs relative to the screen-space would bypass a lot of the grunt work needed to extract the data (the on-screen XY coordinates), and go straight to actually working with that instead.
The position on the Z-axis can be adjusted separately; I don't think that would be a big problem. A tag + follower on the Z-axis would be enough.
The problem is to:
- A) detect things on screen,
- B) get and edit their coordinates relative to the screen,
- C) do it without complex setups, using the data the engine already has.
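Once screen-space coordinates are available, A) and B) become trivial operations. A sketch of the two building blocks (pure Python, names are mine; coordinates are assumed to be in pixels with the origin at the top-left):

```python
def on_screen(sx, sy, screen_w, screen_h):
    # A) visibility test: is the projected point inside the screen?
    return 0 <= sx <= screen_w and 0 <= sy <= screen_h

def clamp_to_screen(sx, sy, screen_w, screen_h, margin=0):
    # B) keep a puppet or crosshair within the screen edges,
    # optionally inset by a margin so it never touches the border.
    cx = min(max(sx, margin), screen_w - margin)
    cy = min(max(sy, margin), screen_h - margin)
    return (cx, cy)

print(on_screen(960, 540, 1920, 1080))        # centre of the screen
print(clamp_to_screen(-50, 2000, 1920, 1080)) # pushed back to the edges
```

This is the "grunt work" the post describes: in Dreams today you'd rebuild logic like this out of gadgets, whereas screen-space inputs/outputs would hand you `sx`/`sy` directly.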
Hoping this makes sense.